Result: FAILURE
Tests: 1 failed / 2863 succeeded
Started: 2019-09-10 09:15
Elapsed: 34m14s
Revision:
Builder: gke-prow-ssd-pool-1a225945-fk19
pod: 671fc433-d3ab-11e9-9d26-329cee23a2e0
resultstore: https://source.cloud.google.com/results/invocations/244747e4-a6e8-4a2f-85da-7a8aa719506f/targets/test
infra-commit: 31b4bf7cd
repo: k8s.io/kubernetes
repo-commit: 6348200c92dec8848e55552f3e8039b3da95bd91
repos: {'k8s.io/kubernetes': 'master'}

Test Failures

k8s.io/kubernetes/test/integration/volumescheduling TestVolumeProvision (27s)

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeProvision$
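
To re-run just this failure locally, a minimal sketch (assuming a GOPATH-style k8s.io/kubernetes checkout and an etcd already serving on http://127.0.0.1:2379, the endpoint the integration framework dials in the log below):

# etcd must be reachable on 127.0.0.1:2379 (see the storagebackend ServerList entries in the log)
cd $GOPATH/src/k8s.io/kubernetes
go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeProvision$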
=== RUN   TestVolumeProvision
W0910 09:48:12.034563  111732 feature_gate.go:208] Setting GA feature gate PersistentLocalVolumes=true. It will be removed in a future release.
I0910 09:48:12.034571  111732 feature_gate.go:216] feature gates: &{map[PersistentLocalVolumes:true]}
I0910 09:48:12.035448  111732 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0910 09:48:12.035487  111732 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0910 09:48:12.035502  111732 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0910 09:48:12.035516  111732 master.go:259] Using reconciler: 
I0910 09:48:12.037620  111732 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.038028  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.038074  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.039945  111732 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0910 09:48:12.040007  111732 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.040065  111732 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0910 09:48:12.040496  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.040532  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.041472  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.041595  111732 store.go:1342] Monitoring events count at <storage-prefix>//events
I0910 09:48:12.041748  111732 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0910 09:48:12.041709  111732 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.042000  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.042058  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.042836  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.042915  111732 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0910 09:48:12.042965  111732 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.043007  111732 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0910 09:48:12.043374  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.043424  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.044266  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.044560  111732 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0910 09:48:12.044728  111732 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0910 09:48:12.044771  111732 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.044940  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.044967  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.045891  111732 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0910 09:48:12.046246  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.046047  111732 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0910 09:48:12.046684  111732 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.046967  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.047005  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.047384  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.047991  111732 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0910 09:48:12.048075  111732 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0910 09:48:12.048252  111732 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.048513  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.048549  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.049436  111732 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0910 09:48:12.049465  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.049536  111732 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0910 09:48:12.049757  111732 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.049963  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.049994  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.051224  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.051603  111732 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0910 09:48:12.051702  111732 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0910 09:48:12.051843  111732 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.052057  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.052090  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.053872  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.054005  111732 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0910 09:48:12.054133  111732 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0910 09:48:12.055214  111732 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.055432  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.055907  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.056031  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.056938  111732 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0910 09:48:12.056991  111732 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0910 09:48:12.057114  111732 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.058278  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.058315  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.058320  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.059422  111732 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0910 09:48:12.059458  111732 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0910 09:48:12.059760  111732 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.059948  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.059983  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.060532  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.061129  111732 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0910 09:48:12.061286  111732 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0910 09:48:12.062023  111732 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.062351  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.062405  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.062462  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.063419  111732 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0910 09:48:12.063477  111732 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0910 09:48:12.063727  111732 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.063962  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.064055  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.064898  111732 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0910 09:48:12.064951  111732 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.065116  111732 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0910 09:48:12.065138  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.065231  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.065811  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.066220  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.066310  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.066471  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.067203  111732 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.067380  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.067414  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.068237  111732 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0910 09:48:12.068274  111732 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0910 09:48:12.068357  111732 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0910 09:48:12.069007  111732 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.069358  111732 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.069412  111732 watch_cache.go:405] Replace watchCache (rev: 58006) 
I0910 09:48:12.070531  111732 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.071348  111732 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.072037  111732 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.072757  111732 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.073188  111732 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.073327  111732 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.073611  111732 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.074031  111732 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.074499  111732 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.074719  111732 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.075641  111732 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.075927  111732 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.076561  111732 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.076813  111732 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.077372  111732 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.077655  111732 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.077874  111732 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.078053  111732 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.078321  111732 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.078665  111732 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.078860  111732 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.079703  111732 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.079969  111732 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.080849  111732 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.081648  111732 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.082020  111732 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.082356  111732 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.083121  111732 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.083465  111732 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.084179  111732 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.084960  111732 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.085521  111732 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.086299  111732 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.086535  111732 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.086644  111732 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0910 09:48:12.086715  111732 master.go:461] Enabling API group "authentication.k8s.io".
I0910 09:48:12.086731  111732 master.go:461] Enabling API group "authorization.k8s.io".
I0910 09:48:12.086961  111732 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.087202  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.087236  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.088379  111732 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0910 09:48:12.088755  111732 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0910 09:48:12.089039  111732 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.089299  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.089339  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.090431  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.090613  111732 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0910 09:48:12.090861  111732 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0910 09:48:12.090956  111732 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.091221  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.091261  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.091996  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.092178  111732 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0910 09:48:12.092208  111732 master.go:461] Enabling API group "autoscaling".
I0910 09:48:12.092288  111732 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0910 09:48:12.092458  111732 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.092661  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.092709  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.093205  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.094311  111732 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0910 09:48:12.094528  111732 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.094727  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.094763  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.094855  111732 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0910 09:48:12.095881  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.096216  111732 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0910 09:48:12.096248  111732 master.go:461] Enabling API group "batch".
I0910 09:48:12.096258  111732 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0910 09:48:12.096403  111732 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.096556  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.096585  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.097487  111732 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0910 09:48:12.097513  111732 master.go:461] Enabling API group "certificates.k8s.io".
I0910 09:48:12.097553  111732 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0910 09:48:12.097662  111732 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.098072  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.098105  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.098331  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.098406  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.098874  111732 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0910 09:48:12.098965  111732 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0910 09:48:12.099037  111732 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.099245  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.099266  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.100372  111732 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0910 09:48:12.100401  111732 master.go:461] Enabling API group "coordination.k8s.io".
I0910 09:48:12.100497  111732 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0910 09:48:12.100528  111732 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0910 09:48:12.100706  111732 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.100822  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.100858  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.101554  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.101611  111732 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0910 09:48:12.101665  111732 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0910 09:48:12.101718  111732 master.go:461] Enabling API group "extensions".
I0910 09:48:12.101926  111732 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.102078  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.102109  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.102716  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.102932  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.103443  111732 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0910 09:48:12.103560  111732 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0910 09:48:12.103808  111732 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.104635  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.104676  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.104707  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.105602  111732 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0910 09:48:12.105638  111732 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0910 09:48:12.105639  111732 master.go:461] Enabling API group "networking.k8s.io".
I0910 09:48:12.105827  111732 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.105986  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.106392  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.106568  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.107332  111732 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0910 09:48:12.107360  111732 master.go:461] Enabling API group "node.k8s.io".
I0910 09:48:12.107453  111732 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0910 09:48:12.108300  111732 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.108438  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.108461  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.108490  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.109682  111732 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0910 09:48:12.109759  111732 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0910 09:48:12.110003  111732 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.110503  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.110544  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.111275  111732 watch_cache.go:405] Replace watchCache (rev: 58007) 
I0910 09:48:12.111754  111732 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0910 09:48:12.111782  111732 master.go:461] Enabling API group "policy".
I0910 09:48:12.111821  111732 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.111976  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.112007  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.112087  111732 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0910 09:48:12.113337  111732 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0910 09:48:12.113550  111732 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0910 09:48:12.113634  111732 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.113813  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.113831  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.114436  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.114600  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.115557  111732 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0910 09:48:12.115638  111732 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.115715  111732 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0910 09:48:12.115779  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.115802  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.116707  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.116773  111732 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0910 09:48:12.116799  111732 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0910 09:48:12.116976  111732 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.117146  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.117257  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.118270  111732 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0910 09:48:12.118339  111732 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.118381  111732 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0910 09:48:12.118930  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.119105  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.120544  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.120586  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.121828  111732 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0910 09:48:12.121935  111732 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0910 09:48:12.122263  111732 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.123281  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.123326  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.123734  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.124366  111732 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0910 09:48:12.124417  111732 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0910 09:48:12.124432  111732 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.125790  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.126183  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.127065  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.128257  111732 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0910 09:48:12.128445  111732 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.128622  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.128648  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.128619  111732 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0910 09:48:12.130267  111732 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0910 09:48:12.130322  111732 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0910 09:48:12.130382  111732 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0910 09:48:12.131338  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.131932  111732 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.132112  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.132135  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.132728  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.133305  111732 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0910 09:48:12.133364  111732 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0910 09:48:12.133543  111732 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.133800  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.133852  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.134366  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.134736  111732 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0910 09:48:12.135008  111732 master.go:461] Enabling API group "scheduling.k8s.io".
I0910 09:48:12.134918  111732 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0910 09:48:12.135504  111732 master.go:450] Skipping disabled API group "settings.k8s.io".
I0910 09:48:12.135844  111732 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.136047  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.136095  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.136746  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.137258  111732 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0910 09:48:12.137366  111732 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0910 09:48:12.138442  111732 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.138749  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.138850  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.139396  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.139817  111732 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0910 09:48:12.139874  111732 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0910 09:48:12.139893  111732 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.140448  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.140494  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.141197  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.141898  111732 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0910 09:48:12.141978  111732 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0910 09:48:12.141979  111732 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.142398  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.142441  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.143064  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.143469  111732 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0910 09:48:12.143575  111732 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0910 09:48:12.143720  111732 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.143877  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.143974  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.144973  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.144977  111732 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0910 09:48:12.145010  111732 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0910 09:48:12.145675  111732 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.146602  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.146720  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.146845  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.147985  111732 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0910 09:48:12.148020  111732 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0910 09:48:12.148031  111732 master.go:461] Enabling API group "storage.k8s.io".
I0910 09:48:12.148269  111732 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.148420  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.148444  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.149220  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.149444  111732 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0910 09:48:12.149479  111732 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0910 09:48:12.149628  111732 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.149748  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.149822  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.150956  111732 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0910 09:48:12.151060  111732 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0910 09:48:12.151383  111732 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.151883  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.151919  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.152396  111732 watch_cache.go:405] Replace watchCache (rev: 58009) 
I0910 09:48:12.152941  111732 watch_cache.go:405] Replace watchCache (rev: 58008) 
I0910 09:48:12.152949  111732 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0910 09:48:12.153067  111732 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0910 09:48:12.153378  111732 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.153645  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.153678  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.153922  111732 watch_cache.go:405] Replace watchCache (rev: 58009) 
I0910 09:48:12.154854  111732 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0910 09:48:12.154944  111732 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0910 09:48:12.155243  111732 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.155454  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.155490  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.156074  111732 watch_cache.go:405] Replace watchCache (rev: 58009) 
I0910 09:48:12.156659  111732 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0910 09:48:12.156687  111732 master.go:461] Enabling API group "apps".
I0910 09:48:12.156760  111732 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.156839  111732 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0910 09:48:12.157034  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.157069  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.158347  111732 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0910 09:48:12.158505  111732 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.158390  111732 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0910 09:48:12.158355  111732 watch_cache.go:405] Replace watchCache (rev: 58009) 
I0910 09:48:12.158897  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.159000  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.159918  111732 watch_cache.go:405] Replace watchCache (rev: 58009) 
I0910 09:48:12.161218  111732 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0910 09:48:12.161025  111732 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0910 09:48:12.162060  111732 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.162446  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.163085  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.163353  111732 watch_cache.go:405] Replace watchCache (rev: 58009) 
I0910 09:48:12.164991  111732 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0910 09:48:12.165126  111732 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0910 09:48:12.166252  111732 watch_cache.go:405] Replace watchCache (rev: 58009) 
I0910 09:48:12.167312  111732 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.167757  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.167794  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.169037  111732 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0910 09:48:12.169073  111732 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0910 09:48:12.169153  111732 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0910 09:48:12.169127  111732 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.170968  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:12.171001  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:12.171156  111732 watch_cache.go:405] Replace watchCache (rev: 58009) 
I0910 09:48:12.172141  111732 store.go:1342] Monitoring events count at <storage-prefix>//events
I0910 09:48:12.172294  111732 master.go:461] Enabling API group "events.k8s.io".
I0910 09:48:12.172296  111732 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0910 09:48:12.172620  111732 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.172850  111732 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.173195  111732 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.173481  111732 watch_cache.go:405] Replace watchCache (rev: 58009) 
I0910 09:48:12.173472  111732 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.173790  111732 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.173964  111732 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.174310  111732 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.174538  111732 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.174686  111732 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.174845  111732 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.175650  111732 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.175918  111732 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.176952  111732 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.177309  111732 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.177967  111732 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.178301  111732 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.179015  111732 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.179365  111732 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.180080  111732 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.180450  111732 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0910 09:48:12.180509  111732 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0910 09:48:12.181232  111732 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.181381  111732 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.181749  111732 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.182562  111732 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.183578  111732 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.184414  111732 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.184738  111732 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.185461  111732 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.186061  111732 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.186343  111732 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.186906  111732 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0910 09:48:12.186970  111732 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0910 09:48:12.187723  111732 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.187976  111732 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.188521  111732 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.189121  111732 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.189613  111732 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.190210  111732 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.190797  111732 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.191337  111732 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.191814  111732 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.192471  111732 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.193214  111732 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0910 09:48:12.193280  111732 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0910 09:48:12.193819  111732 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.194350  111732 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0910 09:48:12.194413  111732 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0910 09:48:12.194917  111732 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.195435  111732 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.195679  111732 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.196290  111732 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.196766  111732 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.197329  111732 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.197880  111732 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0910 09:48:12.197956  111732 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0910 09:48:12.198760  111732 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.199437  111732 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.199774  111732 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.200443  111732 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.200685  111732 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.200928  111732 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.201658  111732 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.202002  111732 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.202447  111732 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.203182  111732 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.203471  111732 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.203767  111732 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0910 09:48:12.203845  111732 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0910 09:48:12.203857  111732 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0910 09:48:12.204713  111732 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.205349  111732 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.205939  111732 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.206581  111732 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.207372  111732 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"6d913918-4c97-43a7-aba3-b1e9b757cc58", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0910 09:48:12.211110  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.211146  111732 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0910 09:48:12.211269  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.211285  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.211292  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.211298  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.211383  111732 httplog.go:90] GET /healthz: (438.245µs) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:12.212929  111732 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.050669ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:12.216263  111732 httplog.go:90] GET /api/v1/services: (1.522948ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:12.221115  111732 httplog.go:90] GET /api/v1/services: (1.339868ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:12.223894  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.223927  111732 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0910 09:48:12.223936  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.223945  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.223952  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.223960  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.223986  111732 httplog.go:90] GET /healthz: (196.891µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:12.227002  111732 httplog.go:90] GET /api/v1/services: (1.631205ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:12.227278  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.486905ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:12.227461  111732 httplog.go:90] GET /api/v1/services: (1.655345ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44510]
I0910 09:48:12.231036  111732 httplog.go:90] POST /api/v1/namespaces: (3.010554ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:12.233737  111732 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.168634ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:12.236932  111732 httplog.go:90] POST /api/v1/namespaces: (2.541294ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:12.239554  111732 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (2.056612ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:12.242889  111732 httplog.go:90] POST /api/v1/namespaces: (2.402484ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:12.312748  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.312790  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.312805  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.312816  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.312826  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.313039  111732 httplog.go:90] GET /healthz: (574.102µs) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:12.325099  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.325206  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.325221  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.325231  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.325240  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.325296  111732 httplog.go:90] GET /healthz: (452.798µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:12.412292  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.412329  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.412339  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.412345  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.412353  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.412389  111732 httplog.go:90] GET /healthz: (280.314µs) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:12.425015  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.425055  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.425065  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.425072  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.425078  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.425129  111732 httplog.go:90] GET /healthz: (290.383µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:12.512602  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.512657  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.512669  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.512676  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.512683  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.512712  111732 httplog.go:90] GET /healthz: (275.742µs) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:12.524953  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.524997  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.525010  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.525021  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.525031  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.525074  111732 httplog.go:90] GET /healthz: (335.287µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:12.612708  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.612756  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.612770  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.612781  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.612884  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.612978  111732 httplog.go:90] GET /healthz: (483.94µs) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:12.625060  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.625118  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.625132  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.625143  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.625151  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.625265  111732 httplog.go:90] GET /healthz: (416.299µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:12.712730  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.712770  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.712786  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.712797  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.712808  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.712888  111732 httplog.go:90] GET /healthz: (438.541µs) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:12.724936  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.724978  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.724991  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.725001  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.725008  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.725033  111732 httplog.go:90] GET /healthz: (261.477µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:12.812511  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.812604  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.812616  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.812624  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.812632  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.812672  111732 httplog.go:90] GET /healthz: (360.047µs) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:12.825214  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.825285  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.825297  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.825306  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.825319  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.825360  111732 httplog.go:90] GET /healthz: (371.244µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:12.913239  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.913282  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.913298  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.913308  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.913318  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.913422  111732 httplog.go:90] GET /healthz: (945.377µs) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:12.925075  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:12.925231  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:12.925248  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:12.925259  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:12.925267  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:12.925343  111732 httplog.go:90] GET /healthz: (447.594µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.012505  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:13.012544  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.012555  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:13.012596  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:13.012615  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:13.012663  111732 httplog.go:90] GET /healthz: (325.626µs) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:13.025277  111732 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0910 09:48:13.025353  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.025366  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:13.025376  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:13.025384  111732 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:13.025446  111732 httplog.go:90] GET /healthz: (395.712µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.035972  111732 client.go:361] parsed scheme: "endpoint"
I0910 09:48:13.036106  111732 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0910 09:48:13.114006  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.114042  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:13.114051  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:13.114059  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:13.114133  111732 httplog.go:90] GET /healthz: (1.622784ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:13.126518  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.126558  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:13.126569  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:13.126576  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:13.126646  111732 httplog.go:90] GET /healthz: (1.815006ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.212572  111732 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.640009ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.212801  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.911084ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.212859  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.236722ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44514]
I0910 09:48:13.213677  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.213710  111732 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0910 09:48:13.213722  111732 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0910 09:48:13.213731  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0910 09:48:13.213807  111732 httplog.go:90] GET /healthz: (1.065501ms) 0 [Go-http-client/1.1 127.0.0.1:44516]
I0910 09:48:13.214908  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.286711ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.215047  111732 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.368351ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.215861  111732 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.638763ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I0910 09:48:13.216086  111732 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0910 09:48:13.217529  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.828381ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.217550  111732 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.150042ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44516]
I0910 09:48:13.217664  111732 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.816357ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.219345  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.068556ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.220376  111732 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.918585ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.220539  111732 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0910 09:48:13.220557  111732 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0910 09:48:13.220582  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (884.688µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.222122  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (858.194µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.223562  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (996.287µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.224973  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (942.783µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.225638  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.225664  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.225697  111732 httplog.go:90] GET /healthz: (1.052519ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.226415  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (917.982µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.227922  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.069806ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.231228  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.757053ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.231762  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0910 09:48:13.233490  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.343723ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.236236  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.122383ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.236571  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0910 09:48:13.238464  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.59888ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.241653  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.387784ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.241873  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0910 09:48:13.243635  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.386197ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.246036  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.905747ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.246353  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0910 09:48:13.247734  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.086276ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.250266  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.131302ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.250854  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0910 09:48:13.252273  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.061988ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.254585  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.818942ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.254924  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0910 09:48:13.256214  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.002786ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.259057  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.237478ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.259366  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0910 09:48:13.260817  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.179299ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.263598  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.193363ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.264004  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0910 09:48:13.265610  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.285597ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.268942  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.667732ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.269531  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0910 09:48:13.271284  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.494522ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.274703  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.964609ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.275394  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0910 09:48:13.277447  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.729336ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.280499  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.358707ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.280759  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0910 09:48:13.282338  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.285253ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.285520  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.276202ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.286066  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0910 09:48:13.287823  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.137261ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.290641  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.193373ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.290923  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0910 09:48:13.292385  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.230135ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.295232  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.280935ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.295577  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0910 09:48:13.297231  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.36839ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.300076  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.228948ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.300473  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0910 09:48:13.302129  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.346545ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.304667  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.962102ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.305140  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0910 09:48:13.306768  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.241979ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.309709  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.203903ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.309986  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0910 09:48:13.313512  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.313550  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.313604  111732 httplog.go:90] GET /healthz: (1.540773ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:13.313782  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (2.333791ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.316279  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.921231ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.316639  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0910 09:48:13.318029  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.121256ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.320682  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.13641ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.321001  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0910 09:48:13.322608  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.317203ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.325385  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.112487ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.325673  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.325707  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.325753  111732 httplog.go:90] GET /healthz: (1.052156ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.325786  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0910 09:48:13.326951  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (966.628µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.329707  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.172288ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.330044  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0910 09:48:13.343561  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (13.146938ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.351759  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.032784ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.352503  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0910 09:48:13.367782  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (14.87144ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.372472  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.000394ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.372939  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0910 09:48:13.374891  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.580188ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.378235  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.560335ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.378719  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0910 09:48:13.380685  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.569247ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.384606  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.042327ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.385143  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0910 09:48:13.387441  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.830595ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.393464  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.110151ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.394277  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0910 09:48:13.396943  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (2.053686ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.401709  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.835306ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.402233  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0910 09:48:13.404206  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.680726ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.407813  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.911216ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.408231  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0910 09:48:13.413126  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (4.507581ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.414454  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.414623  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.414834  111732 httplog.go:90] GET /healthz: (1.762497ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
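The repeating block above is the apiserver's per-check healthz report: each registered check is printed with [+] (passing) or [-] (failing), and the poststarthook/rbac/bootstrap-roles check keeps failing until the default RBAC policy created below has been fully reconciled, which is why every GET /healthz during startup returns a non-200 status. As a hedged, standalone illustration (not part of the test itself), a small Go probe against an assumed local insecure endpoint could fetch the verbose form of that report; the address is a placeholder.

// healthz_probe.go - minimal sketch, assuming a locally reachable apiserver endpoint.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder address; the integration test above uses an in-process server.
	resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A non-200 status here corresponds to the "healthz check failed" lines in the log.
	fmt.Printf("status=%d\n%s", resp.StatusCode, body)
}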
I0910 09:48:13.415898  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.782203ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.416193  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0910 09:48:13.417479  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.034404ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.420248  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.132566ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.420528  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0910 09:48:13.422252  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.473505ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.425082  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.263339ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.425374  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0910 09:48:13.426739  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.426775  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.426746  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.141659ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.426824  111732 httplog.go:90] GET /healthz: (1.698034ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.429563  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.189314ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.430065  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0910 09:48:13.431898  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.427523ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.435446  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.771308ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.435806  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0910 09:48:13.437591  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.446359ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.440459  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.304753ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.440791  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0910 09:48:13.442391  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.354107ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.445363  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.317807ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.445819  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0910 09:48:13.447460  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.394191ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.450005  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.906692ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.450265  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0910 09:48:13.453236  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (2.590445ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.456445  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.41208ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.456697  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0910 09:48:13.458184  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.239498ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.460594  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.776422ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.460962  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0910 09:48:13.462680  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.337942ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.465757  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.215589ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.465977  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0910 09:48:13.467429  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.16656ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.469755  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.862164ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.470025  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0910 09:48:13.471537  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.161383ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.475412  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.817148ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.475769  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0910 09:48:13.477925  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.773564ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.480841  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.200382ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.481178  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0910 09:48:13.482696  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.260347ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.491437  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.148922ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.491818  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0910 09:48:13.493498  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.396023ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.496015  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.562712ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.496288  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0910 09:48:13.497466  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (985.763µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.499869  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.840127ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.500193  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0910 09:48:13.501566  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.069598ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.504121  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.905914ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.504523  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0910 09:48:13.506109  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.254212ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.508942  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.070293ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.509230  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0910 09:48:13.510949  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.455056ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.513571  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.144515ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.513815  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.513858  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.513898  111732 httplog.go:90] GET /healthz: (1.791831ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:13.513817  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0910 09:48:13.515994  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.854232ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.518845  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.184866ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.519652  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0910 09:48:13.521476  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.462262ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.524134  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.084607ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.524528  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0910 09:48:13.525818  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.525851  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.525898  111732 httplog.go:90] GET /healthz: (1.187595ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.526512  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.391339ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.529415  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.356653ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.529722  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0910 09:48:13.531455  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.458243ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.534413  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.429694ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.534758  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0910 09:48:13.536643  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.562177ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.539081  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.860738ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.539344  111732 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
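The GET/POST pairs above are the RBAC bootstrap reconciler at work: each default ClusterRole is looked up first (the 404 when it does not exist yet) and then created (the 201). A minimal client-go sketch of the same ensure-if-missing pattern follows; the kubeconfig path and the example policy rules are illustrative assumptions, not the canonical bootstrap policy or the code used by this test.

// ensure_clusterrole.go - hedged sketch of the get-then-create pattern seen in the log.
package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the test talks to an in-process apiserver instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Illustrative rules only; the real system:volume-scheduler policy is defined by the bootstrap code.
	role := &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "system:volume-scheduler"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"persistentvolumes", "persistentvolumeclaims"},
			Verbs:     []string{"get", "list", "watch", "update"},
		}},
	}

	ctx := context.Background()
	// GET first; a NotFound error here is the 404 seen in the log.
	if _, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{}); err != nil {
		if apierrors.IsNotFound(err) {
			// Create is the POST that returns 201 in the log.
			if _, err := cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{}); err != nil {
				panic(err)
			}
			fmt.Println("created clusterrole", role.Name)
			return
		}
		panic(err)
	}
	fmt.Println("clusterrole already present:", role.Name)
}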
I0910 09:48:13.541175  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.552907ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.556579  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.408257ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.557008  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0910 09:48:13.573608  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.072115ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.593861  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.377417ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.594410  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0910 09:48:13.612899  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.499091ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.613231  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.613265  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.613298  111732 httplog.go:90] GET /healthz: (1.237948ms) 0 [Go-http-client/1.1 127.0.0.1:44508]
I0910 09:48:13.626086  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.626136  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.626205  111732 httplog.go:90] GET /healthz: (1.438583ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.633944  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.465044ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.634412  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0910 09:48:13.653832  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.855041ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.674449  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.676192ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.676043  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0910 09:48:13.694452  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.564331ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.714332  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.698519ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.714538  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.714559  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.714603  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0910 09:48:13.714627  111732 httplog.go:90] GET /healthz: (2.566839ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:13.727690  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.727775  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.727875  111732 httplog.go:90] GET /healthz: (2.930506ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.733939  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (2.382375ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.754488  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.956428ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.755199  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0910 09:48:13.773549  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (2.067201ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.796363  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.711431ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.796733  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0910 09:48:13.813499  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.944046ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:13.813702  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.814000  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.814312  111732 httplog.go:90] GET /healthz: (2.181852ms) 0 [Go-http-client/1.1 127.0.0.1:44508]
I0910 09:48:13.826396  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.826580  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.826726  111732 httplog.go:90] GET /healthz: (1.943002ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.835186  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.229957ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.835893  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0910 09:48:13.853800  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (2.127023ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.874647  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.988247ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.875439  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0910 09:48:13.893336  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.789538ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.914081  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.914143  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.914250  111732 httplog.go:90] GET /healthz: (2.175037ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:13.914935  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.360948ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.915477  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0910 09:48:13.926998  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:13.927044  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:13.927197  111732 httplog.go:90] GET /healthz: (2.246749ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.933640  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.104223ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.954422  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.775557ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.954952  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0910 09:48:13.973356  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.688539ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.995427  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.884552ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:13.995918  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0910 09:48:14.013129  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.013257  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.013308  111732 httplog.go:90] GET /healthz: (1.238368ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:14.013456  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.946547ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.027049  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.027104  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.027281  111732 httplog.go:90] GET /healthz: (2.2648ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.035399  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.514917ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.035892  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0910 09:48:14.053490  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.895972ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.074846  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.2358ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.075417  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0910 09:48:14.093772  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (2.287468ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.114910  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.447461ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.115247  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.115281  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.115282  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0910 09:48:14.115319  111732 httplog.go:90] GET /healthz: (3.074233ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:14.126533  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.126570  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.126745  111732 httplog.go:90] GET /healthz: (1.768026ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.133261  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.826227ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.154489  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.986585ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.154821  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0910 09:48:14.173370  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.744861ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.194778  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.224571ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.195221  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0910 09:48:14.213249  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.646167ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.213634  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.213821  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.214008  111732 httplog.go:90] GET /healthz: (1.932084ms) 0 [Go-http-client/1.1 127.0.0.1:44508]
I0910 09:48:14.226426  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.226471  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.226563  111732 httplog.go:90] GET /healthz: (1.790222ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.234352  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.7848ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.234644  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0910 09:48:14.253095  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.654151ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.275306  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.708872ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.275685  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0910 09:48:14.293816  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (2.142828ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.313756  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.313792  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.313836  111732 httplog.go:90] GET /healthz: (1.733396ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:14.314933  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.336055ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.315283  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0910 09:48:14.326500  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.326578  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.326680  111732 httplog.go:90] GET /healthz: (1.824936ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.333454  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.880122ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.354815  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.188453ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.355217  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0910 09:48:14.374135  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (2.548623ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.394770  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.995098ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.395125  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0910 09:48:14.414317  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.414370  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.414428  111732 httplog.go:90] GET /healthz: (1.852895ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:14.414571  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.802621ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.427444  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.427495  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.427578  111732 httplog.go:90] GET /healthz: (2.450315ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.436680  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.680194ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.437272  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0910 09:48:14.454114  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (2.413688ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.475323  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.537174ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.475631  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0910 09:48:14.493411  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.838098ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.514264  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.514308  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.514374  111732 httplog.go:90] GET /healthz: (2.324533ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:14.514569  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.03049ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.514810  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0910 09:48:14.527480  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.527692  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.527989  111732 httplog.go:90] GET /healthz: (3.068465ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.533582  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.944232ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.554743  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.125066ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.555151  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0910 09:48:14.573363  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.69658ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.594430  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.871392ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.595205  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0910 09:48:14.613057  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.613305  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.613132  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.681477ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.613404  111732 httplog.go:90] GET /healthz: (1.394366ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:14.626234  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.626280  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.626400  111732 httplog.go:90] GET /healthz: (1.679488ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.634570  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.869211ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.635129  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0910 09:48:14.653427  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.727031ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.674457  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.688366ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.674774  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0910 09:48:14.693178  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.748643ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.713534  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.713579  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.713642  111732 httplog.go:90] GET /healthz: (1.603162ms) 0 [Go-http-client/1.1 127.0.0.1:44508]
I0910 09:48:14.714450  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.7283ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.714725  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0910 09:48:14.726406  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.726453  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.726597  111732 httplog.go:90] GET /healthz: (1.630031ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.733436  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.8056ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.755020  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.298155ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.755458  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0910 09:48:14.773577  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.878363ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.794554  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.933542ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.794841  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0910 09:48:14.813470  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.867656ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:14.814086  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.814119  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.814194  111732 httplog.go:90] GET /healthz: (2.103697ms) 0 [Go-http-client/1.1 127.0.0.1:44508]
I0910 09:48:14.826834  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.826886  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.826972  111732 httplog.go:90] GET /healthz: (2.122273ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.834495  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.881334ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.834972  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0910 09:48:14.852969  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.494231ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.876721  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.128681ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.877059  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0910 09:48:14.893365  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.783626ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.914348  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.914386  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.914436  111732 httplog.go:90] GET /healthz: (2.001165ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:14.915471  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.979409ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.915859  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0910 09:48:14.926422  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:14.926474  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:14.926537  111732 httplog.go:90] GET /healthz: (1.556594ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.933677  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.953114ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.954520  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.904302ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.954931  111732 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0910 09:48:14.972939  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.473146ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.975190  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.685593ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.993924  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.535756ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:14.995040  111732 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0910 09:48:15.013287  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.786403ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.013464  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:15.013552  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:15.013637  111732 httplog.go:90] GET /healthz: (1.615323ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:15.015884  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.006224ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.026128  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:15.026218  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:15.026363  111732 httplog.go:90] GET /healthz: (1.534694ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.034440  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.968252ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.034788  111732 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0910 09:48:15.056522  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (4.635596ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.059750  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.486925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.074938  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.436316ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.075288  111732 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0910 09:48:15.093001  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.448646ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.095419  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.684529ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.113855  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.355647ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.113877  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:15.113899  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:15.113933  111732 httplog.go:90] GET /healthz: (1.778246ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:15.114328  111732 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0910 09:48:15.126274  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:15.126326  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:15.126416  111732 httplog.go:90] GET /healthz: (1.552373ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:15.133677  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (2.29908ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:15.139810  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.933784ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:15.154049  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.54218ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:15.154523  111732 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0910 09:48:15.173084  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.684067ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:15.175688  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.070348ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:15.194924  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.223903ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:15.195692  111732 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0910 09:48:15.213322  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.939292ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:15.213335  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:15.213570  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:15.213615  111732 httplog.go:90] GET /healthz: (1.530838ms) 0 [Go-http-client/1.1 127.0.0.1:44508]
I0910 09:48:15.215611  111732 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.682926ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.226545  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:15.226612  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:15.226669  111732 httplog.go:90] GET /healthz: (1.822727ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.234865  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.413941ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.235205  111732 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0910 09:48:15.253136  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.694923ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.256105  111732 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.804165ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.274581  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.071175ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.274884  111732 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0910 09:48:15.293193  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.701231ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.295614  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.572064ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.313746  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:15.313969  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:15.314061  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.465161ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.314297  111732 httplog.go:90] GET /healthz: (2.162755ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:15.314539  111732 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0910 09:48:15.326637  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:15.326680  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:15.326762  111732 httplog.go:90] GET /healthz: (1.733003ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.333659  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.016741ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.336589  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.116392ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.353585  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.133978ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.353963  111732 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0910 09:48:15.373710  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (2.124409ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.376665  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.113875ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.393784  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.35795ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.394197  111732 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0910 09:48:15.412630  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.189491ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.414011  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:15.414042  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:15.414093  111732 httplog.go:90] GET /healthz: (2.044525ms) 0 [Go-http-client/1.1 127.0.0.1:44506]
I0910 09:48:15.414574  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.484981ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.426505  111732 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0910 09:48:15.426728  111732 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0910 09:48:15.426920  111732 httplog.go:90] GET /healthz: (2.032043ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.433783  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.17862ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.434057  111732 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0910 09:48:15.452702  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.277808ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.454920  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.509074ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.473592  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.988795ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.473890  111732 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0910 09:48:15.492963  111732 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.525962ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.495429  111732 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.775765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.514039  111732 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.485781ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.514423  111732 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0910 09:48:15.514931  111732 httplog.go:90] GET /healthz: (1.869193ms) 200 [Go-http-client/1.1 127.0.0.1:44506]
W0910 09:48:15.515829  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:15.515861  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:15.515898  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:15.515913  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:15.515925  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:15.516035  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:15.516063  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:15.516083  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:15.516097  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:15.516241  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:15.516266  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0910 09:48:15.516299  111732 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0910 09:48:15.516381  111732 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0910 09:48:15.517041  111732 reflector.go:120] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517078  111732 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517098  111732 reflector.go:120] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517118  111732 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517124  111732 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517141  111732 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517234  111732 reflector.go:120] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517261  111732 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517250  111732 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517286  111732 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517261  111732 reflector.go:120] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517456  111732 reflector.go:120] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517471  111732 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517584  111732 reflector.go:120] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517615  111732 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517460  111732 reflector.go:158] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517946  111732 reflector.go:120] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517973  111732 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.517988  111732 reflector.go:120] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.518011  111732 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.518572  111732 reflector.go:120] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.518645  111732 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0910 09:48:15.520937  111732 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (974.687µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44652]
I0910 09:48:15.521051  111732 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (623.656µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:15.521303  111732 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (568.326µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44660]
I0910 09:48:15.521605  111732 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (513.817µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44662]
I0910 09:48:15.521829  111732 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (369.516µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44650]
I0910 09:48:15.522097  111732 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (367.617µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44664]
I0910 09:48:15.522332  111732 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (393.451µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44656]
I0910 09:48:15.522360  111732 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (553.04µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:15.522697  111732 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (381.185µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44666]
I0910 09:48:15.522813  111732 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=58009 labels= fields= timeout=7m16s
I0910 09:48:15.522905  111732 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (447.349µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44658]
I0910 09:48:15.523082  111732 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=58006 labels= fields= timeout=8m35s
I0910 09:48:15.523239  111732 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=58007 labels= fields= timeout=7m22s
I0910 09:48:15.523516  111732 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (3.446804ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:15.523803  111732 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=58006 labels= fields= timeout=7m23s
I0910 09:48:15.523843  111732 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=58008 labels= fields= timeout=5m1s
I0910 09:48:15.523524  111732 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=58006 labels= fields= timeout=5m37s
I0910 09:48:15.523815  111732 get.go:250] Starting watch for /api/v1/services, rv=58006 labels= fields= timeout=6m42s
I0910 09:48:15.523845  111732 get.go:250] Starting watch for /api/v1/nodes, rv=58006 labels= fields= timeout=9m52s
I0910 09:48:15.524113  111732 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=58009 labels= fields= timeout=9m49s
I0910 09:48:15.524090  111732 get.go:250] Starting watch for /api/v1/pods, rv=58006 labels= fields= timeout=7m48s
I0910 09:48:15.524427  111732 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=58008 labels= fields= timeout=7m44s
I0910 09:48:15.525874  111732 httplog.go:90] GET /healthz: (1.126052ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44668]
I0910 09:48:15.527600  111732 httplog.go:90] GET /api/v1/namespaces/default: (1.20483ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44668]
I0910 09:48:15.530017  111732 httplog.go:90] POST /api/v1/namespaces: (1.983597ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44668]
I0910 09:48:15.532446  111732 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.832043ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44668]
I0910 09:48:15.537377  111732 httplog.go:90] POST /api/v1/namespaces/default/services: (4.396494ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44668]
I0910 09:48:15.539335  111732 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.571748ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44668]
I0910 09:48:15.541864  111732 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.74132ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44668]
I0910 09:48:15.617069  111732 shared_informer.go:227] caches populated
I0910 09:48:15.717350  111732 shared_informer.go:227] caches populated
I0910 09:48:15.817582  111732 shared_informer.go:227] caches populated
I0910 09:48:15.917854  111732 shared_informer.go:227] caches populated
I0910 09:48:16.021950  111732 shared_informer.go:227] caches populated
I0910 09:48:16.122245  111732 shared_informer.go:227] caches populated
I0910 09:48:16.222583  111732 shared_informer.go:227] caches populated
I0910 09:48:16.322890  111732 shared_informer.go:227] caches populated
I0910 09:48:16.423329  111732 shared_informer.go:227] caches populated
I0910 09:48:16.523642  111732 shared_informer.go:227] caches populated
I0910 09:48:16.623981  111732 shared_informer.go:227] caches populated
I0910 09:48:16.724279  111732 shared_informer.go:227] caches populated
I0910 09:48:16.724560  111732 plugins.go:630] Loaded volume plugin "kubernetes.io/mock-provisioner"
W0910 09:48:16.724588  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:16.724650  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:16.724674  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:16.724689  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0910 09:48:16.724703  111732 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0910 09:48:16.724934  111732 reflector.go:120] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:16.724961  111732 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0910 09:48:16.725034  111732 reflector.go:120] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:16.725054  111732 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0910 09:48:16.725082  111732 reflector.go:120] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:16.725098  111732 reflector.go:158] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0910 09:48:16.725243  111732 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:16.725259  111732 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0910 09:48:16.724945  111732 reflector.go:120] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0910 09:48:16.725272  111732 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0910 09:48:16.725316  111732 pv_controller_base.go:282] Starting persistent volume controller
I0910 09:48:16.725345  111732 shared_informer.go:197] Waiting for caches to sync for persistent volume
I0910 09:48:16.726367  111732 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (693.877µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44674]
I0910 09:48:16.726368  111732 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (612.584µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44678]
I0910 09:48:16.726405  111732 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (719.277µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44676]
I0910 09:48:16.726367  111732 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (975.145µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44668]
I0910 09:48:16.726367  111732 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (955.528µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44672]
I0910 09:48:16.727211  111732 get.go:250] Starting watch for /api/v1/nodes, rv=58006 labels= fields= timeout=5m54s
I0910 09:48:16.727293  111732 get.go:250] Starting watch for /api/v1/pods, rv=58006 labels= fields= timeout=6m37s
I0910 09:48:16.727402  111732 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=58006 labels= fields= timeout=9m41s
I0910 09:48:16.727535  111732 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=58006 labels= fields= timeout=7m7s
I0910 09:48:16.727577  111732 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=58008 labels= fields= timeout=5m8s
I0910 09:48:16.824911  111732 shared_informer.go:227] caches populated
I0910 09:48:16.825551  111732 shared_informer.go:227] caches populated
I0910 09:48:16.825575  111732 shared_informer.go:204] Caches are synced for persistent volume 
I0910 09:48:16.825602  111732 pv_controller_base.go:158] controller initialized
I0910 09:48:16.825667  111732 pv_controller_base.go:419] resyncing PV controller
I0910 09:48:16.925411  111732 shared_informer.go:227] caches populated
I0910 09:48:17.025744  111732 shared_informer.go:227] caches populated
I0910 09:48:17.126046  111732 shared_informer.go:227] caches populated
I0910 09:48:17.226350  111732 shared_informer.go:227] caches populated
I0910 09:48:17.229667  111732 httplog.go:90] POST /api/v1/nodes: (2.625538ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44738]
I0910 09:48:17.230415  111732 node_tree.go:93] Added node "node-1" in group "" to NodeTree
I0910 09:48:17.232539  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.246608ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44738]
I0910 09:48:17.235477  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.30863ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44738]
I0910 09:48:17.236045  111732 volume_binding_test.go:751] Running test wait one bound, one provisioned
I0910 09:48:17.238301  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.854322ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44738]
I0910 09:48:17.241832  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.972713ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44738]
I0910 09:48:17.244903  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.405439ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44738]
I0910 09:48:17.248753  111732 httplog.go:90] POST /api/v1/persistentvolumes: (3.144383ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44738]
I0910 09:48:17.249598  111732 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-canbind", version 58284
I0910 09:48:17.249682  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I0910 09:48:17.249706  111732 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0910 09:48:17.249716  111732 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Available
I0910 09:48:17.253047  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (3.199844ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44738]
I0910 09:48:17.254550  111732 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind", version 58285
I0910 09:48:17.254599  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:17.254667  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: no volume found
I0910 09:48:17.254704  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind] status: set phase Pending
I0910 09:48:17.254722  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind] status: phase Pending already set
I0910 09:48:17.254758  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-w-canbind", UID:"bf7e346e-f121-4fb8-86c9-25eaf4d6f648", APIVersion:"v1", ResourceVersion:"58285", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0910 09:48:17.256431  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (2.686753ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44738]
I0910 09:48:17.256644  111732 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision", version 58286
I0910 09:48:17.256680  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:17.256715  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:17.256741  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Pending
I0910 09:48:17.256759  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: phase Pending already set
I0910 09:48:17.256781  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-canprovision", UID:"fb909efc-4071-4a15-9220-578d60a1afb3", APIVersion:"v1", ResourceVersion:"58286", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0910 09:48:17.258824  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (3.196417ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44762]
I0910 09:48:17.259693  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (2.474388ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44738]
I0910 09:48:17.261354  111732 scheduling_queue.go:830] About to try and schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canbind-or-provision
I0910 09:48:17.261377  111732 scheduler.go:530] Attempting to schedule pod: volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canbind-or-provision
I0910 09:48:17.261781  111732 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canbind-or-provision", PVC "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" on node "node-1"
I0910 09:48:17.261840  111732 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canbind-or-provision", PVC "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" on node "node-1"
I0910 09:48:17.261869  111732 scheduler_binder.go:733] Provisioning for claims of pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canbind-or-provision" that has no matching volumes on node "node-1" ...
I0910 09:48:17.261975  111732 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canbind-or-provision", node "node-1"
I0910 09:48:17.262017  111732 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind", version 58285
I0910 09:48:17.262047  111732 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision", version 58286
I0910 09:48:17.262214  111732 scheduler_binder.go:331] BindPodVolumes for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canbind-or-provision", node "node-1"
I0910 09:48:17.262508  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (12.032965ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:17.262814  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 58289
I0910 09:48:17.262841  111732 pv_controller.go:798] volume "pv-w-canbind" entered phase "Available"
I0910 09:48:17.263023  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 58289
I0910 09:48:17.263046  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I0910 09:48:17.263064  111732 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0910 09:48:17.263070  111732 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Available
I0910 09:48:17.263077  111732 pv_controller.go:780] updating PersistentVolume[pv-w-canbind]: phase Available already set
I0910 09:48:17.264715  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (3.790013ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44762]
I0910 09:48:17.267837  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" with version 58291
I0910 09:48:17.268020  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:17.268141  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: no volume found
I0910 09:48:17.268285  111732 pv_controller.go:1326] provisionClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: started
I0910 09:48:17.268391  111732 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind[bf7e346e-f121-4fb8-86c9-25eaf4d6f648]]
I0910 09:48:17.268146  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-w-canbind: (4.438526ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44738]
I0910 09:48:17.268602  111732 pv_controller.go:1372] provisionClaimOperation [volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind] started, class: "wait-h7pb"
I0910 09:48:17.273698  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-w-canbind: (3.266527ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44762]
I0910 09:48:17.273954  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" with version 58292
I0910 09:48:17.274073  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:17.274083  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" with version 58292
I0910 09:48:17.274202  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (4.599255ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:17.274130  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: no volume found
I0910 09:48:17.274589  111732 pv_controller.go:1326] provisionClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: started
I0910 09:48:17.274752  111732 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind[bf7e346e-f121-4fb8-86c9-25eaf4d6f648]]
I0910 09:48:17.274912  111732 pv_controller.go:1642] operation "provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind[bf7e346e-f121-4fb8-86c9-25eaf4d6f648]" is already running, skipping
I0910 09:48:17.275077  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58293
I0910 09:48:17.275383  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:17.275528  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:17.275541  111732 pv_controller.go:1326] provisionClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: started
I0910 09:48:17.275559  111732 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision[fb909efc-4071-4a15-9220-578d60a1afb3]]
I0910 09:48:17.275615  111732 pv_controller.go:1372] provisionClaimOperation [volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] started, class: "wait-h7pb"
I0910 09:48:17.275789  111732 httplog.go:90] GET /api/v1/persistentvolumes/pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648: (1.320118ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44762]
I0910 09:48:17.276309  111732 pv_controller.go:1476] volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" created
I0910 09:48:17.276345  111732 pv_controller.go:1493] provisionClaimOperation [volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: trying to save volume pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648
I0910 09:48:17.278668  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (2.578717ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:17.279128  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58294
I0910 09:48:17.279325  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58294
I0910 09:48:17.279369  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:17.279403  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:17.279414  111732 pv_controller.go:1326] provisionClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: started
I0910 09:48:17.279432  111732 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision[fb909efc-4071-4a15-9220-578d60a1afb3]]
I0910 09:48:17.279441  111732 pv_controller.go:1642] operation "provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision[fb909efc-4071-4a15-9220-578d60a1afb3]" is already running, skipping
I0910 09:48:17.279589  111732 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648", version 58295
I0910 09:48:17.279631  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind (uid: bf7e346e-f121-4fb8-86c9-25eaf4d6f648)", boundByController: true
I0910 09:48:17.279646  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind
I0910 09:48:17.279667  111732 httplog.go:90] POST /api/v1/persistentvolumes: (3.016413ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44762]
I0910 09:48:17.279683  111732 pv_controller.go:555] synchronizing PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:17.279703  111732 pv_controller.go:603] synchronizing PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: volume not bound yet, waiting for syncClaim to fix it
I0910 09:48:17.279735  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" with version 58292
I0910 09:48:17.279752  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:17.279780  111732 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" found: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind (uid: bf7e346e-f121-4fb8-86c9-25eaf4d6f648)", boundByController: true
I0910 09:48:17.279796  111732 pv_controller.go:931] binding volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind"
I0910 09:48:17.279811  111732 pv_controller.go:829] updating PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind"
I0910 09:48:17.279828  111732 pv_controller.go:841] updating PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind"
I0910 09:48:17.279838  111732 pv_controller.go:777] updating PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: set phase Bound
I0910 09:48:17.279899  111732 pv_controller.go:1501] volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" saved
I0910 09:48:17.281093  111732 httplog.go:90] GET /api/v1/persistentvolumes/pvc-fb909efc-4071-4a15-9220-578d60a1afb3: (1.706766ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:17.281303  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" with version 58295
I0910 09:48:17.281345  111732 pv_controller.go:1554] volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" provisioned for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind"
I0910 09:48:17.281392  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-w-canbind", UID:"bf7e346e-f121-4fb8-86c9-25eaf4d6f648", APIVersion:"v1", ResourceVersion:"58292", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648 using kubernetes.io/mock-provisioner
I0910 09:48:17.281459  111732 pv_controller.go:1476] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" created
I0910 09:48:17.281489  111732 pv_controller.go:1493] provisionClaimOperation [volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: trying to save volume pvc-fb909efc-4071-4a15-9220-578d60a1afb3
I0910 09:48:17.282776  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648/status: (2.360133ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44762]
I0910 09:48:17.284253  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" with version 58296
I0910 09:48:17.284291  111732 pv_controller.go:798] volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" entered phase "Bound"
I0910 09:48:17.284310  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: binding to "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648"
I0910 09:48:17.284334  111732 pv_controller.go:901] volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind"
I0910 09:48:17.284405  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (2.808275ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:17.284611  111732 httplog.go:90] POST /api/v1/persistentvolumes: (2.52489ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:17.284893  111732 pv_controller.go:1501] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" saved
I0910 09:48:17.284926  111732 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3", version 58298
I0910 09:48:17.284952  111732 pv_controller.go:1554] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" provisioned for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:17.285090  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-canprovision", UID:"fb909efc-4071-4a15-9220-578d60a1afb3", APIVersion:"v1", ResourceVersion:"58294", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-fb909efc-4071-4a15-9220-578d60a1afb3 using kubernetes.io/mock-provisioner
I0910 09:48:17.285234  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" with version 58296
I0910 09:48:17.285270  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind (uid: bf7e346e-f121-4fb8-86c9-25eaf4d6f648)", boundByController: true
I0910 09:48:17.285286  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind
I0910 09:48:17.285308  111732 pv_controller.go:555] synchronizing PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:17.285328  111732 pv_controller.go:603] synchronizing PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: volume not bound yet, waiting for syncClaim to fix it
I0910 09:48:17.285590  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" with version 58298
I0910 09:48:17.285630  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: fb909efc-4071-4a15-9220-578d60a1afb3)", boundByController: true
I0910 09:48:17.285643  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:17.285661  111732 pv_controller.go:555] synchronizing PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:17.285676  111732 pv_controller.go:603] synchronizing PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: volume not bound yet, waiting for syncClaim to fix it
I0910 09:48:17.287743  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-w-canbind: (3.106074ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44762]
I0910 09:48:17.287987  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" with version 58299
I0910 09:48:17.288017  111732 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: bound to "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648"
I0910 09:48:17.288029  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind] status: set phase Bound
I0910 09:48:17.290325  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (5.070332ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:17.291675  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-w-canbind/status: (3.342935ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44762]
I0910 09:48:17.292071  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" with version 58301
I0910 09:48:17.292108  111732 pv_controller.go:742] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" entered phase "Bound"
I0910 09:48:17.292131  111732 pv_controller.go:957] volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind"
I0910 09:48:17.292237  111732 pv_controller.go:958] volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind (uid: bf7e346e-f121-4fb8-86c9-25eaf4d6f648)", boundByController: true
I0910 09:48:17.292264  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" status after binding: phase: Bound, bound to: "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648", bindCompleted: true, boundByController: true
I0910 09:48:17.292328  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58294
I0910 09:48:17.292351  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:17.292409  111732 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" found: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: fb909efc-4071-4a15-9220-578d60a1afb3)", boundByController: true
I0910 09:48:17.292430  111732 pv_controller.go:931] binding volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:17.292446  111732 pv_controller.go:829] updating PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:17.292471  111732 pv_controller.go:841] updating PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:17.292486  111732 pv_controller.go:777] updating PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: set phase Bound
I0910 09:48:17.295599  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-fb909efc-4071-4a15-9220-578d60a1afb3/status: (2.690657ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:17.295884  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" with version 58302
I0910 09:48:17.295922  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: fb909efc-4071-4a15-9220-578d60a1afb3)", boundByController: true
I0910 09:48:17.295924  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" with version 58302
I0910 09:48:17.295934  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:17.295954  111732 pv_controller.go:555] synchronizing PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:17.295956  111732 pv_controller.go:798] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" entered phase "Bound"
I0910 09:48:17.295967  111732 pv_controller.go:603] synchronizing PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: volume not bound yet, waiting for syncClaim to fix it
I0910 09:48:17.295974  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: binding to "pvc-fb909efc-4071-4a15-9220-578d60a1afb3"
I0910 09:48:17.295996  111732 pv_controller.go:901] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:17.299184  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (2.775084ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:17.299529  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58303
I0910 09:48:17.299578  111732 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: bound to "pvc-fb909efc-4071-4a15-9220-578d60a1afb3"
I0910 09:48:17.299593  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Bound
I0910 09:48:17.302616  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision/status: (2.642221ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:17.302949  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58304
I0910 09:48:17.303145  111732 pv_controller.go:742] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" entered phase "Bound"
I0910 09:48:17.303857  111732 pv_controller.go:957] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:17.304020  111732 pv_controller.go:958] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: fb909efc-4071-4a15-9220-578d60a1afb3)", boundByController: true
I0910 09:48:17.304069  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-fb909efc-4071-4a15-9220-578d60a1afb3", bindCompleted: true, boundByController: true
I0910 09:48:17.304190  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" with version 58301
I0910 09:48:17.304263  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: phase: Bound, bound to: "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648", bindCompleted: true, boundByController: true
I0910 09:48:17.304334  111732 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" found: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind (uid: bf7e346e-f121-4fb8-86c9-25eaf4d6f648)", boundByController: true
I0910 09:48:17.304421  111732 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: claim is already correctly bound
I0910 09:48:17.304483  111732 pv_controller.go:931] binding volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind"
I0910 09:48:17.304566  111732 pv_controller.go:829] updating PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind"
I0910 09:48:17.304639  111732 pv_controller.go:841] updating PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind"
I0910 09:48:17.304710  111732 pv_controller.go:777] updating PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: set phase Bound
I0910 09:48:17.304745  111732 pv_controller.go:780] updating PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: phase Bound already set
I0910 09:48:17.304777  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: binding to "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648"
I0910 09:48:17.304837  111732 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind]: already bound to "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648"
I0910 09:48:17.304885  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind] status: set phase Bound
I0910 09:48:17.304972  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind] status: phase Bound already set
I0910 09:48:17.305054  111732 pv_controller.go:957] volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind"
I0910 09:48:17.305144  111732 pv_controller.go:958] volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind (uid: bf7e346e-f121-4fb8-86c9-25eaf4d6f648)", boundByController: true
I0910 09:48:17.305277  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" status after binding: phase: Bound, bound to: "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648", bindCompleted: true, boundByController: true
I0910 09:48:17.305421  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58304
I0910 09:48:17.305586  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Bound, bound to: "pvc-fb909efc-4071-4a15-9220-578d60a1afb3", bindCompleted: true, boundByController: true
I0910 09:48:17.305905  111732 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" found: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: fb909efc-4071-4a15-9220-578d60a1afb3)", boundByController: true
I0910 09:48:17.305924  111732 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: claim is already correctly bound
I0910 09:48:17.305936  111732 pv_controller.go:931] binding volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:17.305949  111732 pv_controller.go:829] updating PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:17.305978  111732 pv_controller.go:841] updating PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:17.305991  111732 pv_controller.go:777] updating PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: set phase Bound
I0910 09:48:17.306003  111732 pv_controller.go:780] updating PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: phase Bound already set
I0910 09:48:17.306015  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: binding to "pvc-fb909efc-4071-4a15-9220-578d60a1afb3"
I0910 09:48:17.306043  111732 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: already bound to "pvc-fb909efc-4071-4a15-9220-578d60a1afb3"
I0910 09:48:17.306057  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Bound
I0910 09:48:17.306085  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: phase Bound already set
I0910 09:48:17.306117  111732 pv_controller.go:957] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:17.306142  111732 pv_controller.go:958] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: fb909efc-4071-4a15-9220-578d60a1afb3)", boundByController: true
I0910 09:48:17.306194  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-fb909efc-4071-4a15-9220-578d60a1afb3", bindCompleted: true, boundByController: true
I0910 09:48:17.363425  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canbind-or-provision: (1.89815ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:17.464021  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canbind-or-provision: (2.509471ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:17.516540  111732 cache.go:669] Couldn't expire cache for pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canbind-or-provision. Binding is still in progress.
I0910 09:48:17.564311  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canbind-or-provision: (2.644709ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:17.663978  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canbind-or-provision: (2.403488ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:17.764347  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canbind-or-provision: (2.849188ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:17.863878  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canbind-or-provision: (2.352322ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:17.963709  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canbind-or-provision: (2.226779ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.063766  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canbind-or-provision: (2.207264ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.164064  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canbind-or-provision: (2.467569ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.264427  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canbind-or-provision: (2.863776ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.274749  111732 scheduler_binder.go:545] All PVCs for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canbind-or-provision" are bound
I0910 09:48:18.274872  111732 factory.go:610] Attempting to bind pod-pvc-canbind-or-provision to node-1
I0910 09:48:18.278512  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canbind-or-provision/binding: (3.012299ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.279124  111732 scheduler.go:667] pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canbind-or-provision is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0910 09:48:18.281739  111732 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (2.229002ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.363636  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canbind-or-provision: (2.056988ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.366236  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-w-canbind: (1.799152ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.368836  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (1.917189ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.371276  111732 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind: (1.808373ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.382756  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (10.328432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.388981  111732 pv_controller_base.go:258] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" deleted
I0910 09:48:18.389047  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" with version 58302
I0910 09:48:18.389085  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: fb909efc-4071-4a15-9220-578d60a1afb3)", boundByController: true
I0910 09:48:18.389098  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:18.391305  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (1.828203ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:18.393806  111732 pv_controller.go:547] synchronizing PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision not found
I0910 09:48:18.393843  111732 pv_controller.go:575] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" is released and reclaim policy "Delete" will be executed
I0910 09:48:18.393860  111732 pv_controller.go:777] updating PersistentVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: set phase Released
I0910 09:48:18.394691  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (11.373611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.394725  111732 pv_controller_base.go:258] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" deleted
I0910 09:48:18.400961  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-fb909efc-4071-4a15-9220-578d60a1afb3/status: (6.673645ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:18.401983  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" with version 58320
I0910 09:48:18.402023  111732 pv_controller.go:798] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" entered phase "Released"
I0910 09:48:18.402040  111732 pv_controller.go:1022] reclaimVolume[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: policy is Delete
I0910 09:48:18.402095  111732 pv_controller.go:1631] scheduleOperation[delete-pvc-fb909efc-4071-4a15-9220-578d60a1afb3[d4937d13-ff21-4428-9dfe-462642c87a75]]
I0910 09:48:18.402238  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" with version 58296
I0910 09:48:18.402338  111732 pv_controller.go:1146] deleteVolumeOperation [pvc-fb909efc-4071-4a15-9220-578d60a1afb3] started
I0910 09:48:18.402393  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind (uid: bf7e346e-f121-4fb8-86c9-25eaf4d6f648)", boundByController: true
I0910 09:48:18.402410  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind
I0910 09:48:18.405236  111732 httplog.go:90] GET /api/v1/persistentvolumes/pvc-fb909efc-4071-4a15-9220-578d60a1afb3: (1.700494ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:18.405606  111732 pv_controller.go:1250] isVolumeReleased[pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: volume is released
I0910 09:48:18.405746  111732 pv_controller.go:1285] doDeleteVolume [pvc-fb909efc-4071-4a15-9220-578d60a1afb3]
I0910 09:48:18.405863  111732 pv_controller.go:1316] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" deleted
I0910 09:48:18.405959  111732 pv_controller.go:1193] deleteVolumeOperation [pvc-fb909efc-4071-4a15-9220-578d60a1afb3]: success
I0910 09:48:18.407070  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-w-canbind: (3.785437ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:18.409247  111732 pv_controller.go:547] synchronizing PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind not found
I0910 09:48:18.409290  111732 pv_controller.go:575] volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" is released and reclaim policy "Delete" will be executed
I0910 09:48:18.409311  111732 pv_controller.go:777] updating PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: set phase Released
I0910 09:48:18.411935  111732 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-fb909efc-4071-4a15-9220-578d60a1afb3: (5.612208ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:18.413651  111732 httplog.go:90] DELETE /api/v1/persistentvolumes: (18.390653ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.415045  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648/status: (4.918781ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:18.415315  111732 pv_controller.go:790] updating PersistentVolume[pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648": StorageError: invalid object, Code: 4, Key: /6d913918-4c97-43a7-aba3-b1e9b757cc58/persistentvolumes/pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: d48d3681-40eb-4fab-af89-c600e4712aef, UID in object meta: 
I0910 09:48:18.415333  111732 pv_controller_base.go:202] could not sync volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648": Operation cannot be fulfilled on persistentvolumes "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648": StorageError: invalid object, Code: 4, Key: /6d913918-4c97-43a7-aba3-b1e9b757cc58/persistentvolumes/pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: d48d3681-40eb-4fab-af89-c600e4712aef, UID in object meta: 
I0910 09:48:18.415387  111732 pv_controller_base.go:212] volume "pvc-fb909efc-4071-4a15-9220-578d60a1afb3" deleted
I0910 09:48:18.415422  111732 pv_controller_base.go:212] volume "pv-w-canbind" deleted
I0910 09:48:18.415435  111732 pv_controller_base.go:212] volume "pvc-bf7e346e-f121-4fb8-86c9-25eaf4d6f648" deleted
I0910 09:48:18.415458  111732 pv_controller_base.go:396] deletion of claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" was already processed
I0910 09:48:18.415471  111732 pv_controller_base.go:396] deletion of claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind" was already processed
I0910 09:48:18.436746  111732 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (22.457003ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44780]
I0910 09:48:18.437009  111732 volume_binding_test.go:751] Running test one immediate pv prebound, one wait provisioned
I0910 09:48:18.441613  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.827052ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:18.444967  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.491813ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:18.447746  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.165297ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:18.450975  111732 httplog.go:90] POST /api/v1/persistentvolumes: (2.373138ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:18.451086  111732 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-i-prebound", version 58339
I0910 09:48:18.451147  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound (uid: )", boundByController: false
I0910 09:48:18.451238  111732 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound
I0910 09:48:18.451253  111732 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0910 09:48:18.455241  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (3.302396ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:18.455333  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (3.695557ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:18.455417  111732 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound", version 58341
I0910 09:48:18.455472  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:18.455520  111732 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound (uid: )", boundByController: false
I0910 09:48:18.455549  111732 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound"
I0910 09:48:18.455568  111732 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound"
I0910 09:48:18.455790  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 58342
I0910 09:48:18.455799  111732 pv_controller.go:849] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0910 09:48:18.455830  111732 pv_controller.go:798] volume "pv-i-prebound" entered phase "Available"
I0910 09:48:18.456268  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 58342
I0910 09:48:18.456334  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound (uid: )", boundByController: false
I0910 09:48:18.456345  111732 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound
I0910 09:48:18.456354  111732 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0910 09:48:18.456363  111732 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Available already set
I0910 09:48:18.459016  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (2.748021ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:18.459287  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (3.049519ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:18.459646  111732 pv_controller.go:852] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0910 09:48:18.459677  111732 pv_controller.go:934] error binding volume "pv-i-prebound" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0910 09:48:18.459700  111732 pv_controller_base.go:246] could not sync claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0910 09:48:18.459753  111732 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision", version 58343
I0910 09:48:18.459884  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:18.459940  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:18.459981  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Pending
I0910 09:48:18.460005  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: phase Pending already set
I0910 09:48:18.460280  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-canprovision", UID:"f6f8580e-0927-4f67-9a77-78eff916ab60", APIVersion:"v1", ResourceVersion:"58343", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0910 09:48:18.463715  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (3.663729ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
I0910 09:48:18.464337  111732 scheduling_queue.go:830] About to try and schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned
I0910 09:48:18.464368  111732 scheduler.go:530] Attempting to schedule pod: volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned
E0910 09:48:18.464769  111732 factory.go:561] Error scheduling volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned: pod has unbound immediate PersistentVolumeClaims; retrying
I0910 09:48:18.464823  111732 factory.go:619] Updating pod condition for volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned to (PodScheduled==False, Reason=Unschedulable)
I0910 09:48:18.465634  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (4.733491ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:18.467898  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.168693ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:18.468775  111732 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (2.521035ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44830]
I0910 09:48:18.469475  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned/status: (4.164212ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44754]
E0910 09:48:18.469767  111732 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0910 09:48:18.469883  111732 scheduling_queue.go:830] About to try and schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned
I0910 09:48:18.469897  111732 scheduler.go:530] Attempting to schedule pod: volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned
E0910 09:48:18.470190  111732 factory.go:561] Error scheduling volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned: pod has unbound immediate PersistentVolumeClaims; retrying
I0910 09:48:18.470240  111732 factory.go:619] Updating pod condition for volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned to (PodScheduled==False, Reason=Unschedulable)
E0910 09:48:18.470258  111732 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0910 09:48:18.473268  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.526883ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:18.473824  111732 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (3.227155ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
E0910 09:48:18.473923  111732 factory.go:585] pod is already present in unschedulableQ
I0910 09:48:18.567366  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.373381ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:18.667320  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.284744ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:18.766818  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (1.952097ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:18.867367  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.419071ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:18.967803  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.541279ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:19.067575  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.542883ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:19.167873  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.746861ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:19.268945  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.595045ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:19.367923  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.794482ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:19.468051  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.006073ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:19.567463  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.448167ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:19.668783  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.8072ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:19.767882  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.862507ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:19.867837  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.76877ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:19.967589  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.50484ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:20.067795  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.6961ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:20.167817  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.761584ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:20.268148  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.111301ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:20.367436  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.342985ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:20.467787  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.79837ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:20.568135  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.783239ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:20.668141  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.142218ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:20.768198  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.85262ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:20.868338  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.250007ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:20.968324  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.191802ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:21.068403  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.093649ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:21.167626  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.495305ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:21.267503  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.404214ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:21.367940  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.889587ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:21.467504  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.484619ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:21.567827  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.73308ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:21.667577  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.569061ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:21.767303  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.381795ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:21.867763  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.141118ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:21.967240  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.282189ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:22.066924  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.005619ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:22.167644  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.635081ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:22.267745  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.645436ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:22.367765  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.809245ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:22.467393  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.447502ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:22.567750  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.655805ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:22.668320  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.065668ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:22.767415  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.32725ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:22.867734  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.487721ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:22.967183  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.159018ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:23.067673  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.399128ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:23.167577  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.426951ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:23.267832  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.698698ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:23.367004  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (1.961147ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:23.467843  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.835423ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:23.567715  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.393649ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:23.667714  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.398655ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:23.767457  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.111825ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:23.868105  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.02217ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:23.967605  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.645326ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:24.067748  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.588799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:24.168366  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.206363ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:24.268308  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.161647ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:24.368587  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.306476ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:24.469363  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.879448ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:24.569774  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (4.061205ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:24.668741  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.311047ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:24.768789  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.622269ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:24.867182  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.240161ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:24.968291  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.189712ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.067303  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.202136ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.167680  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.684996ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.267094  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.105403ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.367291  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.129284ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.467582  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.596307ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.528776  111732 httplog.go:90] GET /api/v1/namespaces/default: (2.080154ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.531457  111732 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.983371ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.534435  111732 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.294697ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.570734  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.481364ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.667601  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.504562ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.768940  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.429012ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.867972  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.676453ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:25.967916  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.879241ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:26.067867  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.825398ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:26.167712  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.711971ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:26.268252  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.121064ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:26.367968  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.910828ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:26.468456  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.315653ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:26.568519  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.023019ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:26.667788  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.652518ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:26.768575  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.246158ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:26.869122  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (4.024416ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:26.967749  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.671743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:27.068491  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.847486ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:27.167935  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.811283ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:27.267938  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.781724ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:27.368143  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.811211ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:27.467610  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.505943ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:27.568486  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.332379ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:27.667524  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.227453ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:27.768814  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.463973ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:27.867947  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.853242ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:27.968295  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.981615ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:28.067827  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.486903ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:28.168398  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.939961ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:28.267446  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.278094ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:28.367657  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.533722ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:28.467505  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.458055ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:28.567905  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.818363ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:28.667443  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.43503ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:28.768677  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.644958ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:28.867841  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.799674ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:28.969018  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.798484ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:29.068096  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.926313ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:29.167337  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.314496ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:29.266995  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.120752ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:29.367706  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.437819ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:29.467234  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.123942ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:29.567005  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.070964ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:29.668326  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (1.814922ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:29.766872  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (1.992152ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:29.867211  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.206996ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:29.967128  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.164713ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:30.067139  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.084599ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:30.167729  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.756095ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:30.267046  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.018092ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:30.368144  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.115181ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:30.467104  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.136001ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:30.567108  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.160662ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:30.667605  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.5062ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:30.767392  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.384831ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:30.868081  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.716116ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:30.968180  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.890686ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:31.067510  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.583487ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:31.167995  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.680833ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:31.267482  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.546671ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:31.369314  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (4.084458ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:31.467560  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.479803ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:31.567592  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.450895ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:31.667734  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.753387ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:31.767689  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.627275ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:31.825980  111732 pv_controller_base.go:419] resyncing PV controller
I0910 09:48:31.826288  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 58342
I0910 09:48:31.826363  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound (uid: )", boundByController: false
I0910 09:48:31.826380  111732 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound
I0910 09:48:31.826390  111732 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0910 09:48:31.826438  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound" with version 58341
I0910 09:48:31.826491  111732 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Available already set
I0910 09:48:31.826514  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:31.826612  111732 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound (uid: )", boundByController: false
I0910 09:48:31.826654  111732 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound"
I0910 09:48:31.826669  111732 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound"
I0910 09:48:31.826719  111732 pv_controller.go:849] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0910 09:48:31.831997  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (4.418167ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:31.832920  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 58559
I0910 09:48:31.832390  111732 scheduling_queue.go:830] About to try and schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned
I0910 09:48:31.832979  111732 scheduler.go:530] Attempting to schedule pod: volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned
I0910 09:48:31.832978  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound (uid: 45f45037-5bc4-48a8-9b02-d013dab2878e)", boundByController: false
I0910 09:48:31.833009  111732 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound
I0910 09:48:31.833025  111732 pv_controller.go:555] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:31.833041  111732 pv_controller.go:606] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0910 09:48:31.833063  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 58559
I0910 09:48:31.833075  111732 pv_controller.go:862] updating PersistentVolume[pv-i-prebound]: bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound"
I0910 09:48:31.833086  111732 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Bound
E0910 09:48:31.833282  111732 factory.go:561] Error scheduling volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned: pod has unbound immediate PersistentVolumeClaims; retrying
I0910 09:48:31.833322  111732 factory.go:619] Updating pod condition for volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned to (PodScheduled==False, Reason=Unschedulable)
I0910 09:48:31.837181  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned/status: (3.112467ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:31.837546  111732 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events/pod-i-pv-prebound-w-provisioned.15c30abc261fb871: (2.726624ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:31.837547  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (2.554271ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
E0910 09:48:31.837753  111732 scheduler.go:333] Error updating the condition of the pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned: Operation cannot be fulfilled on pods "pod-i-pv-prebound-w-provisioned": the object has been modified; please apply your changes to the latest version and try again
E0910 09:48:31.837779  111732 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0910 09:48:31.837984  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 58561
I0910 09:48:31.837991  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 58561
I0910 09:48:31.838013  111732 pv_controller.go:798] volume "pv-i-prebound" entered phase "Bound"
I0910 09:48:31.838033  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0910 09:48:31.838035  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound (uid: 45f45037-5bc4-48a8-9b02-d013dab2878e)", boundByController: false
I0910 09:48:31.838052  111732 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound
I0910 09:48:31.838057  111732 pv_controller.go:901] volume "pv-i-prebound" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound"
I0910 09:48:31.838072  111732 pv_controller.go:555] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:31.838091  111732 pv_controller.go:606] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0910 09:48:31.838141  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (4.058462ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45182]
I0910 09:48:31.845288  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-i-pv-prebound: (4.410242ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44828]
I0910 09:48:31.846396  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound" with version 58562
I0910 09:48:31.846444  111732 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I0910 09:48:31.846458  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound] status: set phase Bound
I0910 09:48:31.851662  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-i-pv-prebound/status: (4.784805ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:31.852007  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound" with version 58563
I0910 09:48:31.852069  111732 pv_controller.go:742] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound" entered phase "Bound"
I0910 09:48:31.852095  111732 pv_controller.go:957] volume "pv-i-prebound" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound"
I0910 09:48:31.852132  111732 pv_controller.go:958] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound (uid: 45f45037-5bc4-48a8-9b02-d013dab2878e)", boundByController: false
I0910 09:48:31.852219  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0910 09:48:31.852277  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58343
I0910 09:48:31.852289  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:31.852322  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:31.852352  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Pending
I0910 09:48:31.852372  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: phase Pending already set
I0910 09:48:31.852386  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound" with version 58563
I0910 09:48:31.852397  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0910 09:48:31.852410  111732 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound (uid: 45f45037-5bc4-48a8-9b02-d013dab2878e)", boundByController: false
I0910 09:48:31.852420  111732 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound]: claim is already correctly bound
I0910 09:48:31.852430  111732 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound"
I0910 09:48:31.852439  111732 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound"
I0910 09:48:31.852458  111732 pv_controller.go:841] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound"
I0910 09:48:31.852467  111732 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Bound
I0910 09:48:31.852474  111732 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I0910 09:48:31.852483  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0910 09:48:31.852503  111732 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I0910 09:48:31.852510  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound] status: set phase Bound
I0910 09:48:31.852524  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound] status: phase Bound already set
I0910 09:48:31.852533  111732 pv_controller.go:957] volume "pv-i-prebound" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound"
I0910 09:48:31.852551  111732 pv_controller.go:958] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound (uid: 45f45037-5bc4-48a8-9b02-d013dab2878e)", boundByController: false
I0910 09:48:31.852562  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0910 09:48:31.852739  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-canprovision", UID:"f6f8580e-0927-4f67-9a77-78eff916ab60", APIVersion:"v1", ResourceVersion:"58343", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0910 09:48:31.857887  111732 httplog.go:90] PATCH /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events/pvc-canprovision.15c30abc25d5b30f: (4.063546ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:31.867867  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.516996ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:31.967028  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.060848ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:32.066964  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.105252ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:32.167172  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.235752ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:32.267894  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.872283ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:32.367304  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.245481ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:32.467234  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.335302ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:32.567828  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.664546ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:32.667388  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.38015ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:32.766812  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (1.991491ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:32.867327  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.337744ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:32.967876  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.800375ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.067832  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.777434ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.167132  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.152063ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.267395  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.436864ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.367287  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.151552ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.467225  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.275765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.519757  111732 scheduling_queue.go:830] About to try and schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned
I0910 09:48:33.519819  111732 scheduler.go:530] Attempting to schedule pod: volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned
I0910 09:48:33.520151  111732 scheduler_binder.go:651] All bound volumes for Pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned" match with Node "node-1"
I0910 09:48:33.520254  111732 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned", PVC "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" on node "node-1"
I0910 09:48:33.520278  111732 scheduler_binder.go:733] Provisioning for claims of pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned" that has no matching volumes on node "node-1" ...
I0910 09:48:33.520404  111732 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned", node "node-1"
I0910 09:48:33.520543  111732 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision", version 58343
I0910 09:48:33.520638  111732 scheduler_binder.go:331] BindPodVolumes for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned", node "node-1"
I0910 09:48:33.522227  111732 cache.go:669] Couldn't expire cache for pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned. Binding is still in progress.
I0910 09:48:33.524644  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (3.424589ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.524732  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58565
I0910 09:48:33.524765  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:33.524805  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:33.524820  111732 pv_controller.go:1326] provisionClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: started
I0910 09:48:33.524845  111732 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision[f6f8580e-0927-4f67-9a77-78eff916ab60]]
I0910 09:48:33.524930  111732 pv_controller.go:1372] provisionClaimOperation [volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] started, class: "wait-t5gh"
I0910 09:48:33.528641  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (3.143553ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:33.528935  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58566
I0910 09:48:33.528970  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:33.529001  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:33.529010  111732 pv_controller.go:1326] provisionClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: started
I0910 09:48:33.529003  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58566
I0910 09:48:33.529025  111732 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision[f6f8580e-0927-4f67-9a77-78eff916ab60]]
I0910 09:48:33.529033  111732 pv_controller.go:1642] operation "provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision[f6f8580e-0927-4f67-9a77-78eff916ab60]" is already running, skipping
I0910 09:48:33.530704  111732 httplog.go:90] GET /api/v1/persistentvolumes/pvc-f6f8580e-0927-4f67-9a77-78eff916ab60: (1.463282ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:33.531130  111732 pv_controller.go:1476] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" created
I0910 09:48:33.531185  111732 pv_controller.go:1493] provisionClaimOperation [volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: trying to save volume pvc-f6f8580e-0927-4f67-9a77-78eff916ab60
I0910 09:48:33.534728  111732 httplog.go:90] POST /api/v1/persistentvolumes: (3.072619ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:33.535083  111732 pv_controller.go:1501] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" saved
I0910 09:48:33.535119  111732 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60", version 58567
I0910 09:48:33.535153  111732 pv_controller.go:1554] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" provisioned for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:33.535274  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" with version 58567
I0910 09:48:33.535317  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: f6f8580e-0927-4f67-9a77-78eff916ab60)", boundByController: true
I0910 09:48:33.535333  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:33.535353  111732 pv_controller.go:555] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:33.535370  111732 pv_controller.go:603] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: volume not bound yet, waiting for syncClaim to fix it
I0910 09:48:33.535407  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58566
I0910 09:48:33.535423  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:33.535461  111732 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" found: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: f6f8580e-0927-4f67-9a77-78eff916ab60)", boundByController: true
I0910 09:48:33.535483  111732 pv_controller.go:931] binding volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:33.535498  111732 pv_controller.go:829] updating PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:33.535513  111732 pv_controller.go:841] updating PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:33.535524  111732 pv_controller.go:777] updating PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: set phase Bound
I0910 09:48:33.535509  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-canprovision", UID:"f6f8580e-0927-4f67-9a77-78eff916ab60", APIVersion:"v1", ResourceVersion:"58566", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f6f8580e-0927-4f67-9a77-78eff916ab60 using kubernetes.io/mock-provisioner
I0910 09:48:33.538135  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (2.534131ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:33.538531  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-f6f8580e-0927-4f67-9a77-78eff916ab60/status: (2.668119ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.538758  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" with version 58569
I0910 09:48:33.538784  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" with version 58569
I0910 09:48:33.538806  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: f6f8580e-0927-4f67-9a77-78eff916ab60)", boundByController: true
I0910 09:48:33.538807  111732 pv_controller.go:798] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" entered phase "Bound"
I0910 09:48:33.538822  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: binding to "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60"
I0910 09:48:33.538822  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:33.538841  111732 pv_controller.go:901] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:33.538846  111732 pv_controller.go:555] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:33.538869  111732 pv_controller.go:603] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: volume not bound yet, waiting for syncClaim to fix it
I0910 09:48:33.542388  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (3.274753ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.542707  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58570
I0910 09:48:33.542743  111732 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: bound to "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60"
I0910 09:48:33.542759  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Bound
I0910 09:48:33.546799  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision/status: (3.437226ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.547392  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58571
I0910 09:48:33.547550  111732 pv_controller.go:742] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" entered phase "Bound"
I0910 09:48:33.547714  111732 pv_controller.go:957] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:33.547829  111732 pv_controller.go:958] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: f6f8580e-0927-4f67-9a77-78eff916ab60)", boundByController: true
I0910 09:48:33.547951  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60", bindCompleted: true, boundByController: true
I0910 09:48:33.548117  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58571
I0910 09:48:33.548320  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Bound, bound to: "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60", bindCompleted: true, boundByController: true
I0910 09:48:33.548438  111732 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" found: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: f6f8580e-0927-4f67-9a77-78eff916ab60)", boundByController: true
I0910 09:48:33.548549  111732 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: claim is already correctly bound
I0910 09:48:33.548652  111732 pv_controller.go:931] binding volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:33.548754  111732 pv_controller.go:829] updating PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:33.548871  111732 pv_controller.go:841] updating PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:33.549000  111732 pv_controller.go:777] updating PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: set phase Bound
I0910 09:48:33.549117  111732 pv_controller.go:780] updating PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: phase Bound already set
I0910 09:48:33.549316  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: binding to "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60"
I0910 09:48:33.549446  111732 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: already bound to "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60"
I0910 09:48:33.549596  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Bound
I0910 09:48:33.549716  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: phase Bound already set
I0910 09:48:33.549851  111732 pv_controller.go:957] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:33.549987  111732 pv_controller.go:958] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: f6f8580e-0927-4f67-9a77-78eff916ab60)", boundByController: true
I0910 09:48:33.550118  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60", bindCompleted: true, boundByController: true
I0910 09:48:33.567779  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.621567ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.667367  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.257592ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.767600  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.692776ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.868926  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (3.701463ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:33.967655  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.66401ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.067476  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.416187ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.167464  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.503322ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.267412  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.477249ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.367528  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.4859ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.467637  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.631262ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.522401  111732 cache.go:669] Couldn't expire cache for pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned. Binding is still in progress.
I0910 09:48:34.525612  111732 scheduler_binder.go:545] All PVCs for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned" are bound
I0910 09:48:34.525716  111732 factory.go:610] Attempting to bind pod-i-pv-prebound-w-provisioned to node-1
I0910 09:48:34.529439  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned/binding: (3.123942ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.529815  111732 scheduler.go:667] pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-pv-prebound-w-provisioned is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0910 09:48:34.533108  111732 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (2.772928ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.567148  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-pv-prebound-w-provisioned: (2.180413ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.570101  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-i-pv-prebound: (2.144214ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.573342  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (2.375394ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.576465  111732 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-prebound: (2.453655ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.586283  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (9.17516ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.591702  111732 pv_controller_base.go:258] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" deleted
I0910 09:48:34.591762  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" with version 58569
I0910 09:48:34.591805  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: f6f8580e-0927-4f67-9a77-78eff916ab60)", boundByController: true
I0910 09:48:34.591822  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:34.593098  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (993.065µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.593509  111732 pv_controller.go:547] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision not found
I0910 09:48:34.593529  111732 pv_controller.go:575] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" is released and reclaim policy "Delete" will be executed
I0910 09:48:34.593543  111732 pv_controller.go:777] updating PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: set phase Released
I0910 09:48:34.594136  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (7.278427ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.594492  111732 pv_controller_base.go:258] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound" deleted
I0910 09:48:34.596639  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-f6f8580e-0927-4f67-9a77-78eff916ab60/status: (2.742856ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.597141  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" with version 58578
I0910 09:48:34.597252  111732 pv_controller.go:798] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" entered phase "Released"
I0910 09:48:34.597270  111732 pv_controller.go:1022] reclaimVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: policy is Delete
I0910 09:48:34.597298  111732 pv_controller.go:1631] scheduleOperation[delete-pvc-f6f8580e-0927-4f67-9a77-78eff916ab60[2a1d2e44-67bd-4a4b-b129-3e5d07a5432f]]
I0910 09:48:34.597347  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 58561
I0910 09:48:34.597378  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound (uid: 45f45037-5bc4-48a8-9b02-d013dab2878e)", boundByController: false
I0910 09:48:34.597393  111732 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound
I0910 09:48:34.597420  111732 pv_controller.go:547] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound not found
I0910 09:48:34.597435  111732 pv_controller.go:575] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I0910 09:48:34.597446  111732 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Released
I0910 09:48:34.597470  111732 pv_controller.go:1146] deleteVolumeOperation [pvc-f6f8580e-0927-4f67-9a77-78eff916ab60] started
I0910 09:48:34.599509  111732 httplog.go:90] GET /api/v1/persistentvolumes/pvc-f6f8580e-0927-4f67-9a77-78eff916ab60: (1.442204ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:34.599760  111732 pv_controller.go:1250] isVolumeReleased[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: volume is released
I0910 09:48:34.599876  111732 pv_controller.go:1285] doDeleteVolume [pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]
I0910 09:48:34.599985  111732 pv_controller.go:1316] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" deleted
I0910 09:48:34.600124  111732 pv_controller.go:1193] deleteVolumeOperation [pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: success
I0910 09:48:34.600802  111732 store.go:362] GuaranteedUpdate of /6d913918-4c97-43a7-aba3-b1e9b757cc58/persistentvolumes/pv-i-prebound failed because of a conflict, going to retry
I0910 09:48:34.601230  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (3.403114ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.601712  111732 pv_controller.go:790] updating PersistentVolume[pv-i-prebound]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": StorageError: invalid object, Code: 4, Key: /6d913918-4c97-43a7-aba3-b1e9b757cc58/persistentvolumes/pv-i-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 87cf9fdb-0fcd-435a-bfe4-a0b084ef6d2f, UID in object meta: 
I0910 09:48:34.601858  111732 pv_controller_base.go:202] could not sync volume "pv-i-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": StorageError: invalid object, Code: 4, Key: /6d913918-4c97-43a7-aba3-b1e9b757cc58/persistentvolumes/pv-i-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 87cf9fdb-0fcd-435a-bfe4-a0b084ef6d2f, UID in object meta: 
I0910 09:48:34.602015  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" with version 58578
I0910 09:48:34.602201  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: phase: Released, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: f6f8580e-0927-4f67-9a77-78eff916ab60)", boundByController: true
I0910 09:48:34.602365  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:34.602475  111732 pv_controller.go:547] synchronizing PersistentVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision not found
I0910 09:48:34.602553  111732 pv_controller.go:1022] reclaimVolume[pvc-f6f8580e-0927-4f67-9a77-78eff916ab60]: policy is Delete
I0910 09:48:34.602643  111732 pv_controller.go:1631] scheduleOperation[delete-pvc-f6f8580e-0927-4f67-9a77-78eff916ab60[2a1d2e44-67bd-4a4b-b129-3e5d07a5432f]]
I0910 09:48:34.602721  111732 pv_controller.go:1642] operation "delete-pvc-f6f8580e-0927-4f67-9a77-78eff916ab60[2a1d2e44-67bd-4a4b-b129-3e5d07a5432f]" is already running, skipping
I0910 09:48:34.602816  111732 pv_controller_base.go:212] volume "pv-i-prebound" deleted
I0910 09:48:34.602916  111732 pv_controller_base.go:396] deletion of claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-i-pv-prebound" was already processed
I0910 09:48:34.604361  111732 httplog.go:90] DELETE /api/v1/persistentvolumes: (9.789519ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.604654  111732 pv_controller_base.go:212] volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" deleted
I0910 09:48:34.604821  111732 pv_controller_base.go:396] deletion of claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" was already processed
I0910 09:48:34.604392  111732 store.go:228] deletion of /6d913918-4c97-43a7-aba3-b1e9b757cc58/persistentvolumes/pvc-f6f8580e-0927-4f67-9a77-78eff916ab60 failed because of a conflict, going to retry
I0910 09:48:34.605247  111732 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-f6f8580e-0927-4f67-9a77-78eff916ab60: (4.810551ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:34.605522  111732 pv_controller.go:1200] failed to delete volume "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" from database: persistentvolumes "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60" not found
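The 404 on the individual DELETE just above is a benign race: the test's bulk DELETE of /api/v1/persistentvolumes removed the object before the controller's deleteVolumeOperation issued its own delete. Callers that only need the object gone typically treat NotFound as success; a minimal, cluster-free sketch of that pattern using apimachinery's error helpers (ensureVolumeGone and the simulated error are illustrative, not taken from this test):

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// ensureVolumeGone treats NotFound as success: if the volume is already
// deleted (for example because a bulk DELETE got there first, as in the
// log above), there is nothing left to do.
func ensureVolumeGone(deleteFn func() error) error {
	if err := deleteFn(); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return nil
}

func main() {
	pvResource := schema.GroupResource{Group: "", Resource: "persistentvolumes"}
	// Simulate the race seen in the log: the volume is already gone, so the
	// delete call reports NotFound. (Illustrative stand-in for an API call.)
	alreadyDeleted := func() error {
		return apierrors.NewNotFound(pvResource, "pvc-f6f8580e-0927-4f67-9a77-78eff916ab60")
	}
	fmt.Println("error after ensureVolumeGone:", ensureVolumeGone(alreadyDeleted))
}
```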
I0910 09:48:34.615736  111732 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (10.589936ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0910 09:48:34.616016  111732 volume_binding_test.go:751] Running test wait one pv prebound, one provisioned
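The storage classes created just below come in two binding modes: the "wait" classes use WaitForFirstConsumer, which defers binding and provisioning until a pod using the claim is scheduled (that is what later produces the "waiting for first consumer to be created before binding" event), while the "immediate" classes bind as soon as the claim exists. A minimal sketch of how such StorageClass objects are declared with the k8s.io/api types; the fixed names "wait-sc" and "immediate-sc" are illustrative, since the test generates suffixed names such as "wait-vzw6" and "immediate-gxbv":

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	wait := storagev1.VolumeBindingWaitForFirstConsumer
	immediate := storagev1.VolumeBindingImmediate

	// Class used by the "wait" claims: binding is delayed until a consuming
	// pod is scheduled, as seen later in this log.
	waitClass := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "wait-sc"}, // illustrative name
		Provisioner:       "kubernetes.io/mock-provisioner",
		VolumeBindingMode: &wait,
	}

	// Class used by the "immediate" claims: the PV controller provisions and
	// binds as soon as the claim is created, independent of any pod.
	immediateClass := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "immediate-sc"}, // illustrative name
		Provisioner:       "kubernetes.io/mock-provisioner",
		VolumeBindingMode: &immediate,
	}

	fmt.Println(waitClass.Name, *waitClass.VolumeBindingMode)
	fmt.Println(immediateClass.Name, *immediateClass.VolumeBindingMode)
}
```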
I0910 09:48:34.618298  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.970917ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:34.620676  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.876262ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:34.623404  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.965897ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:34.627025  111732 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-prebound", version 58587
I0910 09:48:34.627110  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound (uid: )", boundByController: false
I0910 09:48:34.627123  111732 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound
I0910 09:48:34.627131  111732 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Available
I0910 09:48:34.627227  111732 httplog.go:90] POST /api/v1/persistentvolumes: (3.197933ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:34.630130  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.519431ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:34.630131  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (2.381635ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.630332  111732 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound", version 58588
I0910 09:48:34.630362  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:34.630413  111732 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound (uid: )", boundByController: false
I0910 09:48:34.630428  111732 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound"
I0910 09:48:34.630443  111732 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound"
I0910 09:48:34.630585  111732 pv_controller.go:849] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0910 09:48:34.630760  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 58589
I0910 09:48:34.630793  111732 pv_controller.go:798] volume "pv-w-prebound" entered phase "Available"
I0910 09:48:34.630818  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 58589
I0910 09:48:34.630838  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound (uid: )", boundByController: false
I0910 09:48:34.630844  111732 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound
I0910 09:48:34.630849  111732 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Available
I0910 09:48:34.630856  111732 pv_controller.go:780] updating PersistentVolume[pv-w-prebound]: phase Available already set
I0910 09:48:34.633275  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (2.405176ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:34.633461  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (2.608321ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.633588  111732 pv_controller.go:852] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0910 09:48:34.633662  111732 pv_controller.go:934] error binding volume "pv-w-prebound" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0910 09:48:34.633686  111732 pv_controller_base.go:246] could not sync claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
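The 409 above is the API server's optimistic-concurrency check: the controller tried to save pv-w-prebound with a stale resourceVersion because the status update a few lines earlier had already bumped it, so syncClaim logs the conflict and simply requeues the claim for another attempt. Client code that needs the same read-modify-write behaviour usually wraps the update in client-go's retry helper; a minimal, cluster-free sketch where the simulated updateObject function stands in for a real API call:

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/util/retry"
)

func main() {
	pvResource := schema.GroupResource{Group: "", Resource: "persistentvolumes"}
	attempts := 0

	// updateObject stands in for a read-modify-write against the API server.
	// The first attempt fails with a Conflict (HTTP 409), as in the log; the
	// retry helper re-invokes it and the second attempt succeeds.
	updateObject := func() error {
		attempts++
		if attempts == 1 {
			return apierrors.NewConflict(pvResource, "pv-w-prebound",
				fmt.Errorf("the object has been modified; please apply your changes to the latest version and try again"))
		}
		return nil
	}

	err := retry.RetryOnConflict(retry.DefaultRetry, updateObject)
	fmt.Printf("attempts=%d err=%v\n", attempts, err)
}
```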
I0910 09:48:34.634259  111732 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision", version 58590
I0910 09:48:34.634470  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:34.634596  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:34.634696  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Pending
I0910 09:48:34.634792  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: phase Pending already set
I0910 09:48:34.634841  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-canprovision", UID:"ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6", APIVersion:"v1", ResourceVersion:"58590", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0910 09:48:34.638130  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (3.621788ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.638618  111732 scheduling_queue.go:830] About to try and schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-w-pv-prebound-w-provisioned
I0910 09:48:34.638645  111732 scheduler.go:530] Attempting to schedule pod: volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-w-pv-prebound-w-provisioned
I0910 09:48:34.638729  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (3.62512ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:34.638940  111732 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-w-pv-prebound-w-provisioned", PVC "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" on node "node-1"
I0910 09:48:34.638991  111732 scheduler_binder.go:733] Provisioning for claims of pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-w-pv-prebound-w-provisioned" that has no matching volumes on node "node-1" ...
I0910 09:48:34.639087  111732 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-w-pv-prebound-w-provisioned", node "node-1"
I0910 09:48:34.639136  111732 scheduler_assume_cache.go:320] Assumed v1.PersistentVolume "pv-w-prebound", version 58589
I0910 09:48:34.639150  111732 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision", version 58590
I0910 09:48:34.639278  111732 scheduler_binder.go:331] BindPodVolumes for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-w-pv-prebound-w-provisioned", node "node-1"
I0910 09:48:34.639306  111732 scheduler_binder.go:399] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0910 09:48:34.642817  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (3.112729ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:34.643238  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 58593
I0910 09:48:34.643268  111732 scheduler_binder.go:405] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound"
I0910 09:48:34.643304  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound (uid: 5782cc17-15d5-4c9e-8516-d1d90ddf6127)", boundByController: false
I0910 09:48:34.643320  111732 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound
I0910 09:48:34.643346  111732 pv_controller.go:555] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:34.643366  111732 pv_controller.go:606] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0910 09:48:34.643411  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" with version 58588
I0910 09:48:34.643429  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:34.643468  111732 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Available, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound (uid: 5782cc17-15d5-4c9e-8516-d1d90ddf6127)", boundByController: false
I0910 09:48:34.643485  111732 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound"
I0910 09:48:34.643500  111732 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound"
I0910 09:48:34.643521  111732 pv_controller.go:841] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound"
I0910 09:48:34.643534  111732 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0910 09:48:34.646759  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (3.096133ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:34.647071  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 58595
I0910 09:48:34.647126  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound (uid: 5782cc17-15d5-4c9e-8516-d1d90ddf6127)", boundByController: false
I0910 09:48:34.647144  111732 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound
I0910 09:48:34.647187  111732 pv_controller.go:555] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:34.647206  111732 pv_controller.go:606] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0910 09:48:34.647242  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.981983ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.647610  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 58595
I0910 09:48:34.647640  111732 pv_controller.go:798] volume "pv-w-prebound" entered phase "Bound"
I0910 09:48:34.647652  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0910 09:48:34.647668  111732 pv_controller.go:901] volume "pv-w-prebound" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound"
I0910 09:48:34.650943  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-w-pv-prebound: (3.032827ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.651290  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" with version 58596
I0910 09:48:34.651328  111732 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I0910 09:48:34.651392  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound] status: set phase Bound
I0910 09:48:34.654930  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-w-pv-prebound/status: (2.885018ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.655424  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" with version 58597
I0910 09:48:34.655466  111732 pv_controller.go:742] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" entered phase "Bound"
I0910 09:48:34.655485  111732 pv_controller.go:957] volume "pv-w-prebound" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound"
I0910 09:48:34.655507  111732 pv_controller.go:958] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound (uid: 5782cc17-15d5-4c9e-8516-d1d90ddf6127)", boundByController: false
I0910 09:48:34.655537  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0910 09:48:34.655579  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58594
I0910 09:48:34.655596  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:34.655629  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:34.655646  111732 pv_controller.go:1326] provisionClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: started
I0910 09:48:34.655663  111732 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision[ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]]
I0910 09:48:34.655680  111732 pv_controller_base.go:526] storeObjectUpdate: ignoring claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" version 58596
I0910 09:48:34.655722  111732 pv_controller.go:1372] provisionClaimOperation [volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] started, class: "wait-vzw6"
I0910 09:48:34.655775  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" with version 58597
I0910 09:48:34.655805  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0910 09:48:34.655852  111732 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound (uid: 5782cc17-15d5-4c9e-8516-d1d90ddf6127)", boundByController: false
I0910 09:48:34.655866  111732 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound]: claim is already correctly bound
I0910 09:48:34.655880  111732 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound"
I0910 09:48:34.655894  111732 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound"
I0910 09:48:34.655942  111732 pv_controller.go:841] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound"
I0910 09:48:34.655955  111732 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0910 09:48:34.655965  111732 pv_controller.go:780] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I0910 09:48:34.655977  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0910 09:48:34.656023  111732 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I0910 09:48:34.656036  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound] status: set phase Bound
I0910 09:48:34.656069  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound] status: phase Bound already set
I0910 09:48:34.656105  111732 pv_controller.go:957] volume "pv-w-prebound" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound"
I0910 09:48:34.656132  111732 pv_controller.go:958] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound (uid: 5782cc17-15d5-4c9e-8516-d1d90ddf6127)", boundByController: false
I0910 09:48:34.656153  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0910 09:48:34.659026  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (3.023332ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.659422  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58598
I0910 09:48:34.659474  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:34.659510  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:34.659522  111732 pv_controller.go:1326] provisionClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: started
I0910 09:48:34.659541  111732 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision[ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]]
I0910 09:48:34.659551  111732 pv_controller.go:1642] operation "provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision[ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]" is already running, skipping
I0910 09:48:34.659422  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58598
I0910 09:48:34.661705  111732 httplog.go:90] GET /api/v1/persistentvolumes/pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6: (1.775ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.662356  111732 pv_controller.go:1476] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" created
I0910 09:48:34.662409  111732 pv_controller.go:1493] provisionClaimOperation [volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: trying to save volume pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6
I0910 09:48:34.665412  111732 httplog.go:90] POST /api/v1/persistentvolumes: (2.672312ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.665967  111732 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6", version 58599
I0910 09:48:34.666029  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6)", boundByController: true
I0910 09:48:34.666067  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:34.666148  111732 pv_controller.go:555] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:34.666242  111732 pv_controller.go:603] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: volume not bound yet, waiting for syncClaim to fix it
I0910 09:48:34.666278  111732 pv_controller.go:1501] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" saved
I0910 09:48:34.666300  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" with version 58599
I0910 09:48:34.666329  111732 pv_controller.go:1554] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" provisioned for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:34.666308  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58598
I0910 09:48:34.666395  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:34.666436  111732 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" found: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6)", boundByController: true
I0910 09:48:34.666452  111732 pv_controller.go:931] binding volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:34.666467  111732 pv_controller.go:829] updating PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:34.666460  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-canprovision", UID:"ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6", APIVersion:"v1", ResourceVersion:"58598", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6 using kubernetes.io/mock-provisioner
I0910 09:48:34.666493  111732 pv_controller.go:841] updating PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:34.666598  111732 pv_controller.go:777] updating PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: set phase Bound
I0910 09:48:34.670398  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (3.314857ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:34.670526  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6/status: (3.509362ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.671075  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" with version 58601
I0910 09:48:34.671143  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6)", boundByController: true
I0910 09:48:34.671178  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:34.671184  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" with version 58601
I0910 09:48:34.671196  111732 pv_controller.go:555] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:34.671211  111732 pv_controller.go:603] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: volume not bound yet, waiting for syncClaim to fix it
I0910 09:48:34.671208  111732 pv_controller.go:798] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" entered phase "Bound"
I0910 09:48:34.671227  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: binding to "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6"
I0910 09:48:34.671244  111732 pv_controller.go:901] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:34.676805  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (4.847221ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.677281  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58602
I0910 09:48:34.677324  111732 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: bound to "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6"
I0910 09:48:34.677340  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Bound
I0910 09:48:34.680581  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision/status: (2.919987ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.681359  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58603
I0910 09:48:34.681399  111732 pv_controller.go:742] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" entered phase "Bound"
I0910 09:48:34.681416  111732 pv_controller.go:957] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:34.681446  111732 pv_controller.go:958] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6)", boundByController: true
I0910 09:48:34.681462  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6", bindCompleted: true, boundByController: true
I0910 09:48:34.681518  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58603
I0910 09:48:34.681551  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Bound, bound to: "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6", bindCompleted: true, boundByController: true
I0910 09:48:34.681571  111732 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" found: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6)", boundByController: true
I0910 09:48:34.681591  111732 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: claim is already correctly bound
I0910 09:48:34.681604  111732 pv_controller.go:931] binding volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:34.681664  111732 pv_controller.go:829] updating PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:34.681685  111732 pv_controller.go:841] updating PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:34.681697  111732 pv_controller.go:777] updating PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: set phase Bound
I0910 09:48:34.681708  111732 pv_controller.go:780] updating PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: phase Bound already set
I0910 09:48:34.681719  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: binding to "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6"
I0910 09:48:34.681743  111732 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: already bound to "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6"
I0910 09:48:34.681756  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Bound
I0910 09:48:34.681780  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: phase Bound already set
I0910 09:48:34.681804  111732 pv_controller.go:957] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:34.681828  111732 pv_controller.go:958] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6)", boundByController: true
I0910 09:48:34.681848  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6", bindCompleted: true, boundByController: true
I0910 09:48:34.741378  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-w-pv-prebound-w-provisioned: (2.27889ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.842311  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-w-pv-prebound-w-provisioned: (2.807233ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:34.941457  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-w-pv-prebound-w-provisioned: (2.342722ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.041935  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-w-pv-prebound-w-provisioned: (2.695646ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.141291  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-w-pv-prebound-w-provisioned: (2.27132ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.241243  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-w-pv-prebound-w-provisioned: (2.120315ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.341268  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-w-pv-prebound-w-provisioned: (2.225995ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.442055  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-w-pv-prebound-w-provisioned: (2.826831ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.522684  111732 cache.go:669] Couldn't expire cache for pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-w-pv-prebound-w-provisioned. Binding is still in progress.
I0910 09:48:35.529065  111732 httplog.go:90] GET /api/v1/namespaces/default: (2.211509ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.531646  111732 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.783376ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.534262  111732 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.87178ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.541194  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-w-pv-prebound-w-provisioned: (2.174231ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.641280  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-w-pv-prebound-w-provisioned: (2.222429ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.647781  111732 scheduler_binder.go:545] All PVCs for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-w-pv-prebound-w-provisioned" are bound
I0910 09:48:35.647881  111732 factory.go:610] Attempting to bind pod-w-pv-prebound-w-provisioned to node-1
I0910 09:48:35.651745  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-w-pv-prebound-w-provisioned/binding: (3.34322ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.652322  111732 scheduler.go:667] pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-w-pv-prebound-w-provisioned is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
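The bind step is the POST to the pod's binding subresource logged two lines above: the scheduler expresses "put this pod on node-1" as a Binding object whose Target names the chosen node. A minimal sketch of that payload built with the k8s.io/api types (object construction only; how it is posted depends on the client in use):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// This Binding is what gets POSTed to
	// /api/v1/namespaces/<ns>/pods/<pod>/binding, as in the httplog line above.
	binding := v1.Binding{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd",
			Name:      "pod-w-pv-prebound-w-provisioned",
		},
		Target: v1.ObjectReference{
			Kind: "Node",
			Name: "node-1",
		},
	}
	fmt.Printf("bind %s/%s -> %s\n", binding.Namespace, binding.Name, binding.Target.Name)
}
```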
I0910 09:48:35.655556  111732 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (2.680368ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.741556  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-w-pv-prebound-w-provisioned: (2.425649ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.744461  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-w-pv-prebound: (2.068331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.747016  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (1.961924ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.750117  111732 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-prebound: (2.062538ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.759407  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (8.4177ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.767091  111732 pv_controller_base.go:258] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" deleted
I0910 09:48:35.767297  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" with version 58601
I0910 09:48:35.767344  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6)", boundByController: true
I0910 09:48:35.767366  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:35.769640  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (1.933954ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:35.770126  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (9.768579ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.770218  111732 pv_controller.go:547] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision not found
I0910 09:48:35.770391  111732 pv_controller.go:575] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" is released and reclaim policy "Delete" will be executed
I0910 09:48:35.770408  111732 pv_controller.go:777] updating PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: set phase Released
I0910 09:48:35.771465  111732 pv_controller_base.go:258] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" deleted
I0910 09:48:35.773434  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6/status: (2.633343ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.773757  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" with version 58610
I0910 09:48:35.773785  111732 pv_controller.go:798] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" entered phase "Released"
I0910 09:48:35.773801  111732 pv_controller.go:1022] reclaimVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: policy is Delete
I0910 09:48:35.773827  111732 pv_controller.go:1631] scheduleOperation[delete-pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6[6026cfc1-794e-494e-b503-519cd992bd39]]
I0910 09:48:35.773861  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 58595
I0910 09:48:35.773889  111732 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound (uid: 5782cc17-15d5-4c9e-8516-d1d90ddf6127)", boundByController: false
I0910 09:48:35.773903  111732 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound
I0910 09:48:35.773925  111732 pv_controller.go:547] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound not found
I0910 09:48:35.773940  111732 pv_controller.go:575] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I0910 09:48:35.773954  111732 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Released
I0910 09:48:35.773994  111732 pv_controller.go:1146] deleteVolumeOperation [pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6] started
I0910 09:48:35.776335  111732 httplog.go:90] GET /api/v1/persistentvolumes/pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6: (1.557372ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:35.776605  111732 pv_controller.go:1250] isVolumeReleased[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: volume is released
I0910 09:48:35.776629  111732 pv_controller.go:1285] doDeleteVolume [pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]
I0910 09:48:35.776661  111732 pv_controller.go:1316] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" deleted
I0910 09:48:35.776672  111732 pv_controller.go:1193] deleteVolumeOperation [pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: success
I0910 09:48:35.778902  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (4.144428ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.779294  111732 pv_controller.go:790] updating PersistentVolume[pv-w-prebound]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": StorageError: invalid object, Code: 4, Key: /6d913918-4c97-43a7-aba3-b1e9b757cc58/persistentvolumes/pv-w-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a599a0d3-63f0-4698-89ff-4741dde372fc, UID in object meta: 
I0910 09:48:35.779338  111732 pv_controller_base.go:202] could not sync volume "pv-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": StorageError: invalid object, Code: 4, Key: /6d913918-4c97-43a7-aba3-b1e9b757cc58/persistentvolumes/pv-w-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a599a0d3-63f0-4698-89ff-4741dde372fc, UID in object meta: 
I0910 09:48:35.779388  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" with version 58610
I0910 09:48:35.779426  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: phase: Released, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6)", boundByController: true
I0910 09:48:35.779442  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:35.779468  111732 pv_controller.go:547] synchronizing PersistentVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision not found
I0910 09:48:35.779483  111732 pv_controller.go:1022] reclaimVolume[pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6]: policy is Delete
I0910 09:48:35.779506  111732 pv_controller.go:1631] scheduleOperation[delete-pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6[6026cfc1-794e-494e-b503-519cd992bd39]]
I0910 09:48:35.779516  111732 pv_controller.go:1642] operation "delete-pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6[6026cfc1-794e-494e-b503-519cd992bd39]" is already running, skipping
I0910 09:48:35.779538  111732 pv_controller_base.go:212] volume "pv-w-prebound" deleted
I0910 09:48:35.779574  111732 pv_controller_base.go:396] deletion of claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-pv-prebound" was already processed
I0910 09:48:35.781454  111732 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6: (4.561539ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:35.781838  111732 pv_controller_base.go:212] volume "pvc-ab1aed13-b0bd-4268-b7a8-2b3dd6bf55b6" deleted
I0910 09:48:35.781898  111732 pv_controller_base.go:396] deletion of claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" was already processed
I0910 09:48:35.782546  111732 httplog.go:90] DELETE /api/v1/persistentvolumes: (11.76183ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:35.796576  111732 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (13.483574ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0910 09:48:35.797267  111732 volume_binding_test.go:751] Running test immediate provisioned by controller
I0910 09:48:35.800005  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.368796ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:35.803098  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.194044ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:35.805792  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.902542ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:35.808882  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (2.176749ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:35.809349  111732 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned", version 58619
I0910 09:48:35.809391  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:35.809419  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: no volume found
I0910 09:48:35.809430  111732 pv_controller.go:1326] provisionClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: started
I0910 09:48:35.809449  111732 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned[ebad9926-0c12-46c1-a4b5-047edc9885c1]]
I0910 09:48:35.809508  111732 pv_controller.go:1372] provisionClaimOperation [volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned] started, class: "immediate-gxbv"
I0910 09:48:35.812670  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-controller-provisioned: (2.858883ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.813252  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" with version 58621
I0910 09:48:35.813336  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" with version 58621
I0910 09:48:35.813366  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:35.813396  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: no volume found
I0910 09:48:35.813406  111732 pv_controller.go:1326] provisionClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: started
I0910 09:48:35.813425  111732 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned[ebad9926-0c12-46c1-a4b5-047edc9885c1]]
I0910 09:48:35.813434  111732 pv_controller.go:1642] operation "provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned[ebad9926-0c12-46c1-a4b5-047edc9885c1]" is already running, skipping
I0910 09:48:35.813613  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (4.199785ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:35.814380  111732 scheduling_queue.go:830] About to try and schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-unbound
I0910 09:48:35.814415  111732 scheduler.go:530] Attempting to schedule pod: volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-unbound
E0910 09:48:35.814623  111732 factory.go:561] Error scheduling volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-unbound: pod has unbound immediate PersistentVolumeClaims; retrying
I0910 09:48:35.814669  111732 factory.go:619] Updating pod condition for volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-unbound to (PodScheduled==False, Reason=Unschedulable)
I0910 09:48:35.815841  111732 httplog.go:90] GET /api/v1/persistentvolumes/pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1: (2.328923ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.816429  111732 pv_controller.go:1476] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" created
I0910 09:48:35.816470  111732 pv_controller.go:1493] provisionClaimOperation [volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: trying to save volume pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1
I0910 09:48:35.816931  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (1.383416ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45192]
I0910 09:48:35.817761  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound/status: (2.325364ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:35.817951  111732 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (2.412533ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45194]
E0910 09:48:35.818033  111732 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims
I0910 09:48:35.820081  111732 httplog.go:90] POST /api/v1/persistentvolumes: (1.967988ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45192]
I0910 09:48:35.820504  111732 pv_controller.go:1501] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" saved
I0910 09:48:35.820667  111732 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1", version 58624
I0910 09:48:35.820798  111732 pv_controller.go:1554] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" provisioned for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned"
I0910 09:48:35.821000  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" with version 58624
I0910 09:48:35.821045  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned (uid: ebad9926-0c12-46c1-a4b5-047edc9885c1)", boundByController: true
I0910 09:48:35.821031  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-controller-provisioned", UID:"ebad9926-0c12-46c1-a4b5-047edc9885c1", APIVersion:"v1", ResourceVersion:"58621", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1 using kubernetes.io/mock-provisioner
I0910 09:48:35.821069  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned
I0910 09:48:35.821112  111732 pv_controller.go:555] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:35.821132  111732 pv_controller.go:603] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: volume not bound yet, waiting for syncClaim to fix it
I0910 09:48:35.821233  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" with version 58621
I0910 09:48:35.821264  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:35.821314  111732 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" found: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned (uid: ebad9926-0c12-46c1-a4b5-047edc9885c1)", boundByController: true
I0910 09:48:35.821342  111732 pv_controller.go:931] binding volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned"
I0910 09:48:35.821359  111732 pv_controller.go:829] updating PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned"
I0910 09:48:35.821379  111732 pv_controller.go:841] updating PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned"
I0910 09:48:35.821391  111732 pv_controller.go:777] updating PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: set phase Bound
I0910 09:48:35.824037  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (2.275237ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:35.825406  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1/status: (3.637153ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:35.825938  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" with version 58626
I0910 09:48:35.826116  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned (uid: ebad9926-0c12-46c1-a4b5-047edc9885c1)", boundByController: true
I0910 09:48:35.826295  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned
I0910 09:48:35.826390  111732 pv_controller.go:555] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:35.826499  111732 pv_controller.go:603] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: volume not bound yet, waiting for syncClaim to fix it
I0910 09:48:35.825904  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" with version 58626
I0910 09:48:35.826619  111732 pv_controller.go:798] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" entered phase "Bound"
I0910 09:48:35.826635  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: binding to "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1"
I0910 09:48:35.826672  111732 pv_controller.go:901] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned"
I0910 09:48:35.829895  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-controller-provisioned: (2.91898ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:35.830461  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" with version 58627
I0910 09:48:35.830503  111732 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: bound to "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1"
I0910 09:48:35.830518  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned] status: set phase Bound
I0910 09:48:35.834088  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-controller-provisioned/status: (3.227945ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:35.835046  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" with version 58628
I0910 09:48:35.835097  111732 pv_controller.go:742] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" entered phase "Bound"
I0910 09:48:35.835123  111732 pv_controller.go:957] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned"
I0910 09:48:35.835186  111732 pv_controller.go:958] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned (uid: ebad9926-0c12-46c1-a4b5-047edc9885c1)", boundByController: true
I0910 09:48:35.835210  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" status after binding: phase: Bound, bound to: "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1", bindCompleted: true, boundByController: true
I0910 09:48:35.835275  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" with version 58628
I0910 09:48:35.835295  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: phase: Bound, bound to: "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1", bindCompleted: true, boundByController: true
I0910 09:48:35.835318  111732 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" found: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned (uid: ebad9926-0c12-46c1-a4b5-047edc9885c1)", boundByController: true
I0910 09:48:35.835337  111732 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: claim is already correctly bound
I0910 09:48:35.835350  111732 pv_controller.go:931] binding volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned"
I0910 09:48:35.835369  111732 pv_controller.go:829] updating PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned"
I0910 09:48:35.835393  111732 pv_controller.go:841] updating PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned"
I0910 09:48:35.835410  111732 pv_controller.go:777] updating PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: set phase Bound
I0910 09:48:35.835420  111732 pv_controller.go:780] updating PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: phase Bound already set
I0910 09:48:35.835431  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: binding to "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1"
I0910 09:48:35.835454  111732 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned]: already bound to "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1"
I0910 09:48:35.835467  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned] status: set phase Bound
I0910 09:48:35.835493  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned] status: phase Bound already set
I0910 09:48:35.835509  111732 pv_controller.go:957] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned"
I0910 09:48:35.835536  111732 pv_controller.go:958] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned (uid: ebad9926-0c12-46c1-a4b5-047edc9885c1)", boundByController: true
I0910 09:48:35.835558  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" status after binding: phase: Bound, bound to: "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1", bindCompleted: true, boundByController: true
I0910 09:48:35.917023  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.47024ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:36.017413  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.821411ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:36.117202  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.683465ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:36.216923  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.328627ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:36.316954  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.413569ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:36.417137  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.446554ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:36.518459  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (3.581492ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:36.617312  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.45242ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:36.716598  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.082255ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:36.817083  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.512208ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:36.917148  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.39515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.017624  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (3.026216ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.117534  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.752806ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.216670  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.236301ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.317272  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.691187ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.417385  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.580942ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.519023  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (4.502195ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.520537  111732 scheduling_queue.go:830] About to try and schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-unbound
I0910 09:48:37.520569  111732 scheduler.go:530] Attempting to schedule pod: volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-unbound
I0910 09:48:37.520801  111732 scheduler_binder.go:651] All bound volumes for Pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-unbound" match with Node "node-1"
I0910 09:48:37.520880  111732 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-unbound", node "node-1"
I0910 09:48:37.520898  111732 scheduler_binder.go:266] AssumePodVolumes for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-unbound", node "node-1": all PVCs bound and nothing to do
I0910 09:48:37.520949  111732 factory.go:610] Attempting to bind pod-i-unbound to node-1
I0910 09:48:37.523471  111732 cache.go:669] Couldn't expire cache for pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-unbound. Binding is still in progress.
I0910 09:48:37.525432  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound/binding: (4.103329ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.526120  111732 scheduler.go:667] pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-i-unbound is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0910 09:48:37.530239  111732 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (3.121858ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.617046  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-i-unbound: (2.495687ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.620994  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-controller-provisioned: (3.314933ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.628917  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (7.247957ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.637263  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (7.046828ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.639037  111732 pv_controller_base.go:258] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" deleted
I0910 09:48:37.639100  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" with version 58626
I0910 09:48:37.639144  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned (uid: ebad9926-0c12-46c1-a4b5-047edc9885c1)", boundByController: true
I0910 09:48:37.639175  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned
I0910 09:48:37.642145  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-controller-provisioned: (2.345692ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:37.642578  111732 pv_controller.go:547] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned not found
I0910 09:48:37.642603  111732 pv_controller.go:575] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" is released and reclaim policy "Delete" will be executed
I0910 09:48:37.642620  111732 pv_controller.go:777] updating PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: set phase Released
I0910 09:48:37.647088  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1/status: (4.110923ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:37.647611  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" with version 58634
I0910 09:48:37.647656  111732 pv_controller.go:798] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" entered phase "Released"
I0910 09:48:37.647674  111732 pv_controller.go:1022] reclaimVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: policy is Delete
I0910 09:48:37.647702  111732 pv_controller.go:1631] scheduleOperation[delete-pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1[4fdb0ae1-4069-42fa-97c8-71fc0bb13d4a]]
I0910 09:48:37.647755  111732 pv_controller.go:1146] deleteVolumeOperation [pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1] started
I0910 09:48:37.648035  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" with version 58634
I0910 09:48:37.648086  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: phase: Released, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned (uid: ebad9926-0c12-46c1-a4b5-047edc9885c1)", boundByController: true
I0910 09:48:37.648117  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned
I0910 09:48:37.648147  111732 pv_controller.go:547] synchronizing PersistentVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned not found
I0910 09:48:37.648175  111732 pv_controller.go:1022] reclaimVolume[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: policy is Delete
I0910 09:48:37.648196  111732 pv_controller.go:1631] scheduleOperation[delete-pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1[4fdb0ae1-4069-42fa-97c8-71fc0bb13d4a]]
I0910 09:48:37.648205  111732 pv_controller.go:1642] operation "delete-pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1[4fdb0ae1-4069-42fa-97c8-71fc0bb13d4a]" is already running, skipping
I0910 09:48:37.650243  111732 httplog.go:90] GET /api/v1/persistentvolumes/pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1: (2.199413ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:37.656320  111732 httplog.go:90] DELETE /api/v1/persistentvolumes: (16.398501ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.656561  111732 pv_controller_base.go:212] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" deleted
I0910 09:48:37.656615  111732 pv_controller_base.go:396] deletion of claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-controller-provisioned" was already processed
I0910 09:48:37.657533  111732 pv_controller.go:1250] isVolumeReleased[pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: volume is released
I0910 09:48:37.657691  111732 pv_controller.go:1285] doDeleteVolume [pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]
I0910 09:48:37.658329  111732 pv_controller.go:1316] volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" deleted
I0910 09:48:37.663547  111732 pv_controller.go:1193] deleteVolumeOperation [pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1]: success
I0910 09:48:37.665355  111732 httplog.go:90] DELETE /api/v1/persistentvolumes/pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1: (1.516299ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:37.665670  111732 pv_controller.go:1200] failed to delete volume "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" from database: persistentvolumes "pvc-ebad9926-0c12-46c1-a4b5-047edc9885c1" not found
I0910 09:48:37.678152  111732 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (18.133238ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
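For orientation, the "immediate provisioned by controller" case above boils down to a StorageClass with Immediate volume binding plus a PVC that references it, which the PV controller provisions as soon as the claim is created (before the pod can schedule). The following is a minimal sketch, not the test's actual fixture code; the class and claim names mirror the log, the size and access mode are illustrative, and the provisioner string is the mock one shown in the events above.

```go
// Sketch of the objects behind the "immediate provisioned by controller" case.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func immediateObjects() (*storagev1.StorageClass, *corev1.PersistentVolumeClaim) {
	mode := storagev1.VolumeBindingImmediate
	className := "immediate-gxbv" // name taken from the log above

	// StorageClass with Immediate binding: the PV controller provisions a
	// volume for any pending claim right away, independent of scheduling.
	sc := &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: className},
		Provisioner:       "kubernetes.io/mock-provisioner",
		VolumeBindingMode: &mode,
	}

	// Claim referencing that class; size and access mode are illustrative.
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-controller-provisioned"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &className,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
	return sc, pvc
}

func main() {
	sc, pvc := immediateObjects()
	fmt.Println(sc.Name, pvc.Name)
}
```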
I0910 09:48:37.678771  111732 volume_binding_test.go:751] Running test wait provisioned
I0910 09:48:37.683585  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (4.307317ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.687725  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.468877ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.691296  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.87328ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.695675  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (3.456229ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.696519  111732 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision", version 58642
I0910 09:48:37.696552  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:37.696581  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:37.696608  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Pending
I0910 09:48:37.696625  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: phase Pending already set
I0910 09:48:37.696843  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-canprovision", UID:"50f692f7-bfd9-4781-9efc-625e4aae7619", APIVersion:"v1", ResourceVersion:"58642", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0910 09:48:37.701054  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (4.466043ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.701876  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (4.942544ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:37.702679  111732 scheduling_queue.go:830] About to try and schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canprovision
I0910 09:48:37.702702  111732 scheduler.go:530] Attempting to schedule pod: volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canprovision
I0910 09:48:37.702907  111732 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canprovision", PVC "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" on node "node-1"
I0910 09:48:37.702930  111732 scheduler_binder.go:733] Provisioning for claims of pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canprovision" that has no matching volumes on node "node-1" ...
I0910 09:48:37.702997  111732 scheduler_binder.go:256] AssumePodVolumes for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canprovision", node "node-1"
I0910 09:48:37.703048  111732 scheduler_assume_cache.go:320] Assumed v1.PersistentVolumeClaim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision", version 58642
I0910 09:48:37.703104  111732 scheduler_binder.go:331] BindPodVolumes for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canprovision", node "node-1"
I0910 09:48:37.707475  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (3.780689ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.708095  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58645
I0910 09:48:37.708150  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:37.708203  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:37.708217  111732 pv_controller.go:1326] provisionClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: started
I0910 09:48:37.708247  111732 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision[50f692f7-bfd9-4781-9efc-625e4aae7619]]
I0910 09:48:37.708327  111732 pv_controller.go:1372] provisionClaimOperation [volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] started, class: "wait-z2s9"
I0910 09:48:37.712650  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (3.875824ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:37.712999  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58646
I0910 09:48:37.714731  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58646
I0910 09:48:37.714768  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:37.714797  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: no volume found
I0910 09:48:37.714806  111732 pv_controller.go:1326] provisionClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: started
I0910 09:48:37.714825  111732 pv_controller.go:1631] scheduleOperation[provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision[50f692f7-bfd9-4781-9efc-625e4aae7619]]
I0910 09:48:37.714833  111732 pv_controller.go:1642] operation "provision-volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision[50f692f7-bfd9-4781-9efc-625e4aae7619]" is already running, skipping
I0910 09:48:37.715095  111732 httplog.go:90] GET /api/v1/persistentvolumes/pvc-50f692f7-bfd9-4781-9efc-625e4aae7619: (1.737832ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:37.715536  111732 pv_controller.go:1476] volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" created
I0910 09:48:37.715697  111732 pv_controller.go:1493] provisionClaimOperation [volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: trying to save volume pvc-50f692f7-bfd9-4781-9efc-625e4aae7619
I0910 09:48:37.721485  111732 httplog.go:90] POST /api/v1/persistentvolumes: (5.316061ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:37.722062  111732 pv_controller.go:1501] volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" saved
I0910 09:48:37.722113  111732 pv_controller_base.go:502] storeObjectUpdate: adding volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619", version 58647
I0910 09:48:37.722148  111732 pv_controller.go:1554] volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" provisioned for claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:37.722366  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-canprovision", UID:"50f692f7-bfd9-4781-9efc-625e4aae7619", APIVersion:"v1", ResourceVersion:"58646", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-50f692f7-bfd9-4781-9efc-625e4aae7619 using kubernetes.io/mock-provisioner
I0910 09:48:37.723079  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" with version 58647
I0910 09:48:37.723133  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: 50f692f7-bfd9-4781-9efc-625e4aae7619)", boundByController: true
I0910 09:48:37.723149  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:37.723337  111732 pv_controller.go:555] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:37.723360  111732 pv_controller.go:603] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: volume not bound yet, waiting for syncClaim to fix it
I0910 09:48:37.723411  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58646
I0910 09:48:37.723429  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:37.723469  111732 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" found: phase: Pending, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: 50f692f7-bfd9-4781-9efc-625e4aae7619)", boundByController: true
I0910 09:48:37.723484  111732 pv_controller.go:931] binding volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:37.723499  111732 pv_controller.go:829] updating PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:37.723518  111732 pv_controller.go:841] updating PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:37.723529  111732 pv_controller.go:777] updating PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: set phase Bound
I0910 09:48:37.727225  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (4.349763ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:37.727722  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-50f692f7-bfd9-4781-9efc-625e4aae7619/status: (3.865944ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.727989  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" with version 58649
I0910 09:48:37.728051  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: 50f692f7-bfd9-4781-9efc-625e4aae7619)", boundByController: true
I0910 09:48:37.728079  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:37.728108  111732 pv_controller.go:555] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:37.728127  111732 pv_controller.go:603] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: volume not bound yet, waiting for syncClaim to fix it
I0910 09:48:37.728220  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" with version 58649
I0910 09:48:37.728251  111732 pv_controller.go:798] volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" entered phase "Bound"
I0910 09:48:37.728279  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: binding to "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619"
I0910 09:48:37.728304  111732 pv_controller.go:901] volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:37.733805  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (5.118088ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.734540  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58650
I0910 09:48:37.734584  111732 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: bound to "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619"
I0910 09:48:37.734598  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Bound
I0910 09:48:37.738335  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision/status: (3.420862ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.738701  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58651
I0910 09:48:37.738733  111732 pv_controller.go:742] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" entered phase "Bound"
I0910 09:48:37.738760  111732 pv_controller.go:957] volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:37.738791  111732 pv_controller.go:958] volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: 50f692f7-bfd9-4781-9efc-625e4aae7619)", boundByController: true
I0910 09:48:37.738809  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619", bindCompleted: true, boundByController: true
I0910 09:48:37.738867  111732 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" with version 58651
I0910 09:48:37.738885  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: phase: Bound, bound to: "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619", bindCompleted: true, boundByController: true
I0910 09:48:37.738903  111732 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" found: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: 50f692f7-bfd9-4781-9efc-625e4aae7619)", boundByController: true
I0910 09:48:37.738917  111732 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: claim is already correctly bound
I0910 09:48:37.738927  111732 pv_controller.go:931] binding volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:37.738940  111732 pv_controller.go:829] updating PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: binding to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:37.738965  111732 pv_controller.go:841] updating PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: already bound to "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:37.738977  111732 pv_controller.go:777] updating PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: set phase Bound
I0910 09:48:37.738988  111732 pv_controller.go:780] updating PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: phase Bound already set
I0910 09:48:37.738998  111732 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: binding to "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619"
I0910 09:48:37.739023  111732 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision]: already bound to "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619"
I0910 09:48:37.739034  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: set phase Bound
I0910 09:48:37.739054  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision] status: phase Bound already set
I0910 09:48:37.739072  111732 pv_controller.go:957] volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" bound to claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision"
I0910 09:48:37.739099  111732 pv_controller.go:958] volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" status after binding: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: 50f692f7-bfd9-4781-9efc-625e4aae7619)", boundByController: true
I0910 09:48:37.739118  111732 pv_controller.go:959] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" status after binding: phase: Bound, bound to: "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619", bindCompleted: true, boundByController: true
I0910 09:48:37.805840  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canprovision: (2.358976ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:37.908220  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canprovision: (4.500085ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.006314  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canprovision: (2.906273ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.105936  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canprovision: (2.554574ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.205281  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canprovision: (2.072361ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.306399  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canprovision: (3.03631ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.405676  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canprovision: (2.229809ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.505722  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canprovision: (2.446147ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.523682  111732 cache.go:669] Couldn't expire cache for pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canprovision. Binding is still in progress.
I0910 09:48:38.605565  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canprovision: (2.229973ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.705498  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canprovision: (2.191924ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.711063  111732 scheduler_binder.go:545] All PVCs for pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canprovision" are bound
I0910 09:48:38.711180  111732 factory.go:610] Attempting to bind pod-pvc-canprovision to node-1
I0910 09:48:38.714538  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canprovision/binding: (2.877993ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.714867  111732 scheduler.go:667] pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-canprovision is bound successfully on node "node-1", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0910 09:48:38.717567  111732 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (2.253924ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.806111  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-canprovision: (2.756994ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.809137  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (1.811792ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.816596  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (6.777894ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.823881  111732 pv_controller_base.go:258] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" deleted
I0910 09:48:38.824223  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" with version 58649
I0910 09:48:38.824340  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: phase: Bound, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: 50f692f7-bfd9-4781-9efc-625e4aae7619)", boundByController: true
I0910 09:48:38.824442  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:38.824384  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (6.859573ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.826625  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-canprovision: (1.700615ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:38.826949  111732 pv_controller.go:547] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision not found
I0910 09:48:38.826974  111732 pv_controller.go:575] volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" is released and reclaim policy "Delete" will be executed
I0910 09:48:38.826987  111732 pv_controller.go:777] updating PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: set phase Released
I0910 09:48:38.829730  111732 httplog.go:90] PUT /api/v1/persistentvolumes/pvc-50f692f7-bfd9-4781-9efc-625e4aae7619/status: (2.290385ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:38.829991  111732 store.go:228] deletion of /6d913918-4c97-43a7-aba3-b1e9b757cc58/persistentvolumes/pvc-50f692f7-bfd9-4781-9efc-625e4aae7619 failed because of a conflict, going to retry
I0910 09:48:38.830142  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" with version 58657
I0910 09:48:38.830263  111732 pv_controller.go:798] volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" entered phase "Released"
I0910 09:48:38.830281  111732 pv_controller.go:1022] reclaimVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: policy is Delete
I0910 09:48:38.830316  111732 pv_controller.go:1631] scheduleOperation[delete-pvc-50f692f7-bfd9-4781-9efc-625e4aae7619[3b2f5c0b-107b-45e9-8d09-fbe074da84fa]]
I0910 09:48:38.830380  111732 pv_controller.go:1146] deleteVolumeOperation [pvc-50f692f7-bfd9-4781-9efc-625e4aae7619] started
I0910 09:48:38.830614  111732 pv_controller_base.go:530] storeObjectUpdate updating volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" with version 58657
I0910 09:48:38.830665  111732 pv_controller.go:489] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: phase: Released, bound to: "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision (uid: 50f692f7-bfd9-4781-9efc-625e4aae7619)", boundByController: true
I0910 09:48:38.830679  111732 pv_controller.go:514] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: volume is bound to claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision
I0910 09:48:38.830706  111732 pv_controller.go:547] synchronizing PersistentVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: claim volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision not found
I0910 09:48:38.830717  111732 pv_controller.go:1022] reclaimVolume[pvc-50f692f7-bfd9-4781-9efc-625e4aae7619]: policy is Delete
I0910 09:48:38.830737  111732 pv_controller.go:1631] scheduleOperation[delete-pvc-50f692f7-bfd9-4781-9efc-625e4aae7619[3b2f5c0b-107b-45e9-8d09-fbe074da84fa]]
I0910 09:48:38.830746  111732 pv_controller.go:1642] operation "delete-pvc-50f692f7-bfd9-4781-9efc-625e4aae7619[3b2f5c0b-107b-45e9-8d09-fbe074da84fa]" is already running, skipping
I0910 09:48:38.832200  111732 httplog.go:90] DELETE /api/v1/persistentvolumes: (6.976234ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.832443  111732 pv_controller_base.go:212] volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" deleted
I0910 09:48:38.832481  111732 httplog.go:90] GET /api/v1/persistentvolumes/pvc-50f692f7-bfd9-4781-9efc-625e4aae7619: (1.686572ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:38.832496  111732 pv_controller_base.go:396] deletion of claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-canprovision" was already processed
I0910 09:48:38.832707  111732 pv_controller.go:1153] error reading persistent volume "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619": persistentvolumes "pvc-50f692f7-bfd9-4781-9efc-625e4aae7619" not found
I0910 09:48:38.844632  111732 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (12.063267ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
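The repeated GET requests for pod-pvc-canprovision at roughly 100ms intervals above are the test polling until the scheduler binds the pod. A minimal sketch of that polling pattern follows; podIsScheduled is a hypothetical helper standing in for the real client lookup, and the interval/timeout values are assumptions, not the test's actual settings.

```go
// Sketch of the wait-until-scheduled polling loop reflected in the log.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// podIsScheduled is a stand-in for "GET the pod and check spec.nodeName";
// in the real test this produces the GET entries seen above.
func podIsScheduled(namespace, name string) (bool, error) {
	return true, nil
}

// waitForPodScheduled polls until the pod is bound to a node or the timeout expires.
func waitForPodScheduled(namespace, name string) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		return podIsScheduled(namespace, name)
	})
}

func main() {
	if err := waitForPodScheduled("volume-scheduling", "pod-pvc-canprovision"); err != nil {
		fmt.Println("pod was not scheduled in time:", err)
	}
}
```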
I0910 09:48:38.845266  111732 volume_binding_test.go:751] Running test topology unsatisfied
I0910 09:48:38.847699  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.066349ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.850505  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.080317ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.854976  111732 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.932006ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.859284  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (3.341312ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.859534  111732 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-topomismatch", version 58665
I0910 09:48:38.859572  111732 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-topomismatch]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0910 09:48:38.859604  111732 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-topomismatch]: no volume found
I0910 09:48:38.859627  111732 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-topomismatch] status: set phase Pending
I0910 09:48:38.859641  111732 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-topomismatch] status: phase Pending already set
I0910 09:48:38.859881  111732 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd", Name:"pvc-topomismatch", UID:"8c61fdb9-56ff-487c-918a-de739e36b940", APIVersion:"v1", ResourceVersion:"58665", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0910 09:48:38.863135  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (3.130868ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.863613  111732 httplog.go:90] POST /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (3.457115ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:38.863818  111732 scheduling_queue.go:830] About to try and schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-topomismatch
I0910 09:48:38.863844  111732 scheduler.go:530] Attempting to schedule pod: volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-topomismatch
I0910 09:48:38.864004  111732 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-topomismatch", PVC "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-topomismatch" on node "node-1"
I0910 09:48:38.864051  111732 scheduler_binder.go:723] Node "node-1" cannot satisfy provisioning topology requirements of claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-topomismatch"
I0910 09:48:38.864109  111732 factory.go:545] Unable to schedule volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-topomismatch: no fit: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.; waiting
I0910 09:48:38.864225  111732 factory.go:619] Updating pod condition for volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-topomismatch to (PodScheduled==False, Reason=Unschedulable)
I0910 09:48:38.868126  111732 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-topomismatch/status: (3.423606ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:38.868497  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-topomismatch: (3.914476ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
E0910 09:48:38.868821  111732 factory.go:585] pod is already present in the activeQ
I0910 09:48:38.869028  111732 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (3.899066ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45196]
I0910 09:48:38.871433  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-topomismatch: (2.19501ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44826]
I0910 09:48:38.872016  111732 generic_scheduler.go:337] Preemption will not help schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-topomismatch on any node.
I0910 09:48:38.872144  111732 scheduling_queue.go:830] About to try and schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-topomismatch
I0910 09:48:38.872243  111732 scheduler.go:530] Attempting to schedule pod: volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-topomismatch
I0910 09:48:38.872679  111732 scheduler_binder.go:678] No matching volumes for Pod "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-topomismatch", PVC "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-topomismatch" on node "node-1"
I0910 09:48:38.872834  111732 scheduler_binder.go:723] Node "node-1" cannot satisfy provisioning topology requirements of claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-topomismatch"
I0910 09:48:38.872993  111732 factory.go:545] Unable to schedule volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-topomismatch: no fit: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.; waiting
I0910 09:48:38.873149  111732 factory.go:619] Updating pod condition for volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-topomismatch to (PodScheduled==False, Reason=Unschedulable)
I0910 09:48:38.876210  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-topomismatch: (2.231894ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45196]
I0910 09:48:38.876737  111732 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/events: (2.7788ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.877004  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-topomismatch: (2.170377ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45198]
I0910 09:48:38.877445  111732 generic_scheduler.go:337] Preemption will not help schedule pod volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pod-pvc-topomismatch on any node.
I0910 09:48:38.968350  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods/pod-pvc-topomismatch: (3.458792ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.971981  111732 httplog.go:90] GET /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims/pvc-topomismatch: (2.705041ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.983400  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (10.653452ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.991363  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (6.574263ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:38.991840  111732 pv_controller_base.go:258] claim "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-topomismatch" deleted
I0910 09:48:38.995085  111732 httplog.go:90] DELETE /api/v1/persistentvolumes: (2.764079ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:39.011828  111732 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (15.606183ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
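
The "topology unsatisfied" case above drives the scheduler into "cannot satisfy provisioning topology requirements": the claim's StorageClass uses delayed (WaitForFirstConsumer) binding with an allowed topology that the single node cannot meet. Below is a minimal Go sketch of such a class using the k8s.io/api/storage/v1 types; the class name, provisioner, and topology key/values are illustrative assumptions, not the test's actual fixtures.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Delayed binding: provisioning waits for a pod (WaitForFirstConsumer),
	// and the allowed topology deliberately excludes the test's only node.
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "topomismatch-sc"}, // hypothetical name
		Provisioner:       "example.com/mock-provisioner",             // hypothetical provisioner
		VolumeBindingMode: &mode,
		AllowedTopologies: []v1.TopologySelectorTerm{{
			MatchLabelExpressions: []v1.TopologySelectorLabelRequirement{{
				Key:    "kubernetes.io/hostname",
				Values: []string{"some-other-node"}, // node-1 is not listed, so it cannot satisfy the claim
			}},
		}},
	}
	fmt.Printf("%s allows provisioning only on: %v\n", sc.Name, sc.AllowedTopologies)
}

With a class like this, the volume binder finds no node whose labels match the allowed topology, which is why the pod stays Unschedulable with "1 node(s) didn't find available persistent volumes to bind".
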
I0910 09:48:39.012498  111732 volume_binding_test.go:932] test cluster "volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd" start to tear down
I0910 09:48:39.019296  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pods: (6.302106ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:39.025193  111732 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/persistentvolumeclaims: (3.522894ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:39.029654  111732 httplog.go:90] DELETE /api/v1/persistentvolumes: (3.643923ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:39.036940  111732 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (3.837813ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:39.038234  111732 pv_controller_base.go:298] Shutting down persistent volume controller
I0910 09:48:39.038292  111732 pv_controller_base.go:409] claim worker queue shutting down
E0910 09:48:39.038469  111732 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0910 09:48:39.038556  111732 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=58006&timeout=5m54s&timeoutSeconds=354&watch=true: (22.311641838s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44676]
I0910 09:48:39.038569  111732 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=58006&timeout=9m52s&timeoutSeconds=592&watch=true: (23.514950554s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44666]
I0910 09:48:39.038754  111732 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=58006&timeout=7m7s&timeoutSeconds=427&watch=true: (22.311591279s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44682]
I0910 09:48:39.038773  111732 pv_controller_base.go:352] volume worker queue shutting down
I0910 09:48:39.038833  111732 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=58006&timeout=6m37s&timeoutSeconds=397&watch=true: (22.311906961s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44678]
I0910 09:48:39.038833  111732 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=58006&timeout=7m23s&timeoutSeconds=443&watch=true: (23.5153826s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44664]
I0910 09:48:39.038923  111732 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=58006&timeout=7m48s&timeoutSeconds=468&watch=true: (23.515463179s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44662]
I0910 09:48:39.038941  111732 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=58006&timeout=5m37s&timeoutSeconds=337&watch=true: (23.515875297s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44660]
I0910 09:48:39.039072  111732 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=58006&timeout=9m41s&timeoutSeconds=581&watch=true: (22.311939059s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44684]
I0910 09:48:39.039246  111732 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=58008&timeout=5m8s&timeoutSeconds=308&watch=true: (22.311963258s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44688]
I0910 09:48:39.039323  111732 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=58008&timeout=7m44s&timeoutSeconds=464&watch=true: (23.515319986s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44650]
I0910 09:48:39.039405  111732 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=58006&timeout=6m42s&timeoutSeconds=402&watch=true: (23.515930584s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44656]
I0910 09:48:39.039489  111732 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=58008&timeout=5m1s&timeoutSeconds=301&watch=true: (23.515897325s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44658]
I0910 09:48:39.039572  111732 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=58007&timeout=7m22s&timeoutSeconds=442&watch=true: (23.516623337s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44506]
I0910 09:48:39.039611  111732 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=58006&timeout=8m35s&timeoutSeconds=515&watch=true: (23.516809744s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44652]
I0910 09:48:39.039615  111732 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=58009&timeout=7m16s&timeoutSeconds=436&watch=true: (23.517122525s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44670]
I0910 09:48:39.039982  111732 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=58009&timeout=9m49s&timeoutSeconds=589&watch=true: (23.516117287s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44508]
I0910 09:48:39.048793  111732 httplog.go:90] DELETE /api/v1/nodes: (11.022705ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:39.049528  111732 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0910 09:48:39.054725  111732 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (4.477362ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0910 09:48:39.061082  111732 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (5.219421ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
W0910 09:48:39.063954  111732 feature_gate.go:208] Setting GA feature gate PersistentLocalVolumes=true. It will be removed in a future release.
I0910 09:48:39.064284  111732 feature_gate.go:216] feature gates: &{map[PersistentLocalVolumes:true]}
--- FAIL: TestVolumeProvision (27.03s)
    volume_binding_test.go:1149: Provisioning annotation on PVC volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind not behaving as expected: PVC volume-scheduling5ab9135e-3965-42a3-b703-df1abffed8bd/pvc-w-canbind not expected to be provisioned, but found selected-node annotation
    volume_binding_test.go:1191: PV pv-w-canbind phase not Bound, got Available

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190910-093518.xml
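
The two assertions above reduce to two checks: whether the PVC picked up the selected-node annotation (which the volume binder sets when it triggers delayed provisioning) and whether the PV reached phase Bound. The following is a minimal Go sketch of those checks using the k8s.io/api types; the helper names and the sample objects are illustrative assumptions, not the test's actual code.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annSelectedNode is the annotation the volume binder places on a PVC once a node
// has been chosen for delayed provisioning; its presence means provisioning was triggered.
const annSelectedNode = "volume.kubernetes.io/selected-node"

// provisionTriggered reports whether the claim carries the selected-node annotation.
func provisionTriggered(pvc *v1.PersistentVolumeClaim) bool {
	_, ok := pvc.Annotations[annSelectedNode]
	return ok
}

// boundPhase reports whether the PV has reached phase Bound.
func boundPhase(pv *v1.PersistentVolume) bool {
	return pv.Status.Phase == v1.VolumeBound
}

func main() {
	// Hypothetical objects mirroring the two failed expectations reported above.
	pvc := &v1.PersistentVolumeClaim{ObjectMeta: metav1.ObjectMeta{
		Name:        "pvc-w-canbind",
		Annotations: map[string]string{annSelectedNode: "node-1"},
	}}
	pv := &v1.PersistentVolume{Status: v1.PersistentVolumeStatus{Phase: v1.VolumeAvailable}}

	fmt.Println(provisionTriggered(pvc)) // true  -> "not expected to be provisioned" assertion fails
	fmt.Println(boundPhase(pv))          // false -> "phase not Bound, got Available" assertion fails
}
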

Error lines from build-log.txt

... skipping 812 lines ...
W0910 09:29:51.597] I0910 09:29:51.596519   53412 shared_informer.go:197] Waiting for caches to sync for service account
W0910 09:29:51.598] I0910 09:29:51.597026   53412 controllermanager.go:534] Started "deployment"
W0910 09:29:51.598] W0910 09:29:51.597109   53412 controllermanager.go:526] Skipping "nodeipam"
W0910 09:29:51.598] I0910 09:29:51.597199   53412 deployment_controller.go:152] Starting deployment controller
W0910 09:29:51.599] I0910 09:29:51.597294   53412 shared_informer.go:197] Waiting for caches to sync for deployment
W0910 09:29:51.599] I0910 09:29:51.598256   53412 node_lifecycle_controller.go:77] Sending events to api server
W0910 09:29:51.599] E0910 09:29:51.598326   53412 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W0910 09:29:51.599] W0910 09:29:51.598345   53412 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W0910 09:29:51.600] I0910 09:29:51.599021   53412 controllermanager.go:534] Started "pvc-protection"
W0910 09:29:51.600] I0910 09:29:51.599122   53412 pvc_protection_controller.go:100] Starting PVC protection controller
W0910 09:29:51.600] I0910 09:29:51.599218   53412 shared_informer.go:197] Waiting for caches to sync for PVC protection
W0910 09:29:51.600] I0910 09:29:51.599763   53412 controllermanager.go:534] Started "replicaset"
W0910 09:29:51.601] I0910 09:29:51.599943   53412 replica_set.go:182] Starting replicaset controller
... skipping 4 lines ...
W0910 09:29:51.602] I0910 09:29:51.600709   53412 controllermanager.go:534] Started "pv-protection"
W0910 09:29:51.602] I0910 09:29:51.601738   53412 controllermanager.go:534] Started "cronjob"
W0910 09:29:51.602] W0910 09:29:51.602038   53412 controllermanager.go:526] Skipping "csrsigning"
W0910 09:29:51.603] I0910 09:29:51.600717   53412 pv_protection_controller.go:81] Starting PV protection controller
W0910 09:29:51.603] I0910 09:29:51.602515   53412 shared_informer.go:197] Waiting for caches to sync for PV protection
W0910 09:29:51.603] I0910 09:29:51.602684   53412 cronjob_controller.go:96] Starting CronJob Manager
W0910 09:29:51.604] E0910 09:29:51.603789   53412 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0910 09:29:51.605] W0910 09:29:51.605050   53412 controllermanager.go:526] Skipping "service"
W0910 09:29:51.606] W0910 09:29:51.606117   53412 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0910 09:29:51.607] I0910 09:29:51.607315   53412 controllermanager.go:534] Started "attachdetach"
W0910 09:29:51.608] I0910 09:29:51.607995   53412 controllermanager.go:534] Started "job"
W0910 09:29:51.609] I0910 09:29:51.608753   53412 controllermanager.go:534] Started "csrapproving"
W0910 09:29:51.609] W0910 09:29:51.608938   53412 controllermanager.go:513] "bootstrapsigner" is disabled
... skipping 68 lines ...
W0910 09:29:52.233] I0910 09:29:52.231456   53412 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0910 09:29:52.233] I0910 09:29:52.232668   53412 controllermanager.go:534] Started "daemonset"
W0910 09:29:52.233] I0910 09:29:52.232765   53412 daemon_controller.go:267] Starting daemon sets controller
W0910 09:29:52.233] I0910 09:29:52.232807   53412 shared_informer.go:197] Waiting for caches to sync for daemon sets
W0910 09:29:52.233] I0910 09:29:52.233483   53412 controllermanager.go:534] Started "csrcleaner"
W0910 09:29:52.234] I0910 09:29:52.234145   53412 cleaner.go:81] Starting CSR cleaner controller
W0910 09:29:52.275] W0910 09:29:52.274075   53412 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0910 09:29:52.302] I0910 09:29:52.301309   53412 shared_informer.go:204] Caches are synced for TTL 
W0910 09:29:52.303] I0910 09:29:52.302794   53412 shared_informer.go:204] Caches are synced for PV protection 
W0910 09:29:52.311] I0910 09:29:52.311197   53412 shared_informer.go:204] Caches are synced for certificate 
W0910 09:29:52.318] I0910 09:29:52.317262   53412 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W0910 09:29:52.348] E0910 09:29:52.348056   53412 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0910 09:29:52.351] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0910 09:29:52.361] E0910 09:29:52.359715   53412 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0910 09:29:52.483] NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
I0910 09:29:52.484] kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   56s
I0910 09:29:52.489] Recording: run_kubectl_version_tests
I0910 09:29:52.489] Running command: run_kubectl_version_tests
I0910 09:29:52.522] 
I0910 09:29:52.526] +++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 99 lines ...
I0910 09:29:56.780] +++ working dir: /go/src/k8s.io/kubernetes
I0910 09:29:56.784] +++ command: run_RESTMapper_evaluation_tests
I0910 09:29:56.798] +++ [0910 09:29:56] Creating namespace namespace-1568107796-9274
I0910 09:29:56.907] namespace/namespace-1568107796-9274 created
I0910 09:29:57.000] Context "test" modified.
I0910 09:29:57.009] +++ [0910 09:29:57] Testing RESTMapper
I0910 09:29:57.144] +++ [0910 09:29:57] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0910 09:29:57.163] +++ exit code: 0
I0910 09:29:57.317] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0910 09:29:57.317] bindings                                                                      true         Binding
I0910 09:29:57.318] componentstatuses                 cs                                          false        ComponentStatus
I0910 09:29:57.318] configmaps                        cm                                          true         ConfigMap
I0910 09:29:57.318] endpoints                         ep                                          true         Endpoints
... skipping 595 lines ...
I0910 09:30:19.690] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0910 09:30:19.887] (Bcore.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0910 09:30:20.001] (Bcore.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0910 09:30:20.210] (Bcore.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0910 09:30:20.332] (Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0910 09:30:20.445] (Bpod "valid-pod" force deleted
W0910 09:30:20.546] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0910 09:30:20.547] error: setting 'all' parameter but found a non empty selector. 
W0910 09:30:20.548] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0910 09:30:20.649] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:30:20.702] (Bcore.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0910 09:30:20.789] (Bnamespace/test-kubectl-describe-pod created
I0910 09:30:20.904] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0910 09:30:21.007] (Bcore.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0910 09:30:22.248] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0910 09:30:22.351] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0910 09:30:22.456] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0910 09:30:22.630] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:30:22.842] (Bpod/env-test-pod created
W0910 09:30:22.943] I0910 09:30:21.764571   49861 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0910 09:30:22.943] error: min-available and max-unavailable cannot be both specified
I0910 09:30:23.081] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0910 09:30:23.081] Name:         env-test-pod
I0910 09:30:23.082] Namespace:    test-kubectl-describe-pod
I0910 09:30:23.082] Priority:     0
I0910 09:30:23.082] Node:         <none>
I0910 09:30:23.082] Labels:       <none>
... skipping 174 lines ...
I0910 09:30:38.854] (Bpod/valid-pod patched
I0910 09:30:38.968] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0910 09:30:39.066] (Bpod/valid-pod patched
I0910 09:30:39.190] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0910 09:30:39.397] (Bpod/valid-pod patched
I0910 09:30:39.509] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0910 09:30:39.707] (B+++ [0910 09:30:39] "kubectl patch with resourceVersion 508" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0910 09:30:40.004] pod "valid-pod" deleted
I0910 09:30:40.019] pod/valid-pod replaced
I0910 09:30:40.138] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0910 09:30:40.317] (BSuccessful
I0910 09:30:40.317] message:error: --grace-period must have --force specified
I0910 09:30:40.318] has:\-\-grace-period must have \-\-force specified
I0910 09:30:40.498] Successful
I0910 09:30:40.498] message:error: --timeout must have --force specified
I0910 09:30:40.498] has:\-\-timeout must have \-\-force specified
I0910 09:30:40.691] node/node-v1-test created
W0910 09:30:40.792] W0910 09:30:40.690478   53412 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0910 09:30:40.898] node/node-v1-test replaced
I0910 09:30:41.009] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0910 09:30:41.105] (Bnode "node-v1-test" deleted
I0910 09:30:41.216] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0910 09:30:41.525] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0910 09:30:42.670] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 25 lines ...
I0910 09:30:42.963]     name: kubernetes-pause
I0910 09:30:42.963] has:localonlyvalue
I0910 09:30:42.985] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0910 09:30:43.195] (Bcore.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0910 09:30:43.301] (Bcore.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0910 09:30:43.393] (Bpod/valid-pod labeled
W0910 09:30:43.494] error: 'name' already has a value (valid-pod), and --overwrite is false
I0910 09:30:43.595] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0910 09:30:43.606] (Bcore.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0910 09:30:43.706] (Bpod "valid-pod" force deleted
W0910 09:30:43.808] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0910 09:30:43.908] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:30:43.909] (B+++ [0910 09:30:43] Creating namespace namespace-1568107843-16694
... skipping 82 lines ...
I0910 09:30:51.584] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0910 09:30:51.588] +++ working dir: /go/src/k8s.io/kubernetes
I0910 09:30:51.591] +++ command: run_kubectl_create_error_tests
I0910 09:30:51.603] +++ [0910 09:30:51] Creating namespace namespace-1568107851-2688
I0910 09:30:51.690] namespace/namespace-1568107851-2688 created
I0910 09:30:51.773] Context "test" modified.
I0910 09:30:51.780] +++ [0910 09:30:51] Testing kubectl create with error
W0910 09:30:51.880] Error: must specify one of -f and -k
W0910 09:30:51.881] 
W0910 09:30:51.882] Create a resource from a file or from stdin.
W0910 09:30:51.882] 
W0910 09:30:51.882]  JSON and YAML formats are accepted.
W0910 09:30:51.882] 
W0910 09:30:51.882] Examples:
... skipping 41 lines ...
W0910 09:30:51.888] 
W0910 09:30:51.889] Usage:
W0910 09:30:51.889]   kubectl create -f FILENAME [options]
W0910 09:30:51.889] 
W0910 09:30:51.889] Use "kubectl <command> --help" for more information about a given command.
W0910 09:30:51.889] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0910 09:30:52.043] +++ [0910 09:30:52] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0910 09:30:52.144] kubectl convert is DEPRECATED and will be removed in a future version.
W0910 09:30:52.144] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0910 09:30:52.259] +++ exit code: 0
I0910 09:30:52.299] Recording: run_kubectl_apply_tests
I0910 09:30:52.299] Running command: run_kubectl_apply_tests
I0910 09:30:52.327] 
... skipping 17 lines ...
I0910 09:30:54.230] (Bpod "test-pod" deleted
I0910 09:30:54.482] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0910 09:30:54.804] I0910 09:30:54.803348   49861 client.go:361] parsed scheme: "endpoint"
W0910 09:30:54.804] I0910 09:30:54.803419   49861 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0910 09:30:54.809] I0910 09:30:54.808375   49861 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0910 09:30:54.909] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0910 09:30:55.010] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0910 09:30:55.111] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0910 09:30:55.112] +++ exit code: 0
I0910 09:30:55.112] Recording: run_kubectl_run_tests
I0910 09:30:55.113] Running command: run_kubectl_run_tests
I0910 09:30:55.140] 
I0910 09:30:55.144] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 96 lines ...
I0910 09:30:58.039] Context "test" modified.
I0910 09:30:58.047] +++ [0910 09:30:58] Testing kubectl create filter
I0910 09:30:58.151] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:30:58.407] (Bpod/selector-test-pod created
I0910 09:30:58.529] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0910 09:30:58.641] (BSuccessful
I0910 09:30:58.641] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0910 09:30:58.641] has:pods "selector-test-pod-dont-apply" not found
I0910 09:30:58.743] pod "selector-test-pod" deleted
I0910 09:30:58.767] +++ exit code: 0
I0910 09:30:58.810] Recording: run_kubectl_apply_deployments_tests
I0910 09:30:58.810] Running command: run_kubectl_apply_deployments_tests
I0910 09:30:58.839] 
... skipping 29 lines ...
W0910 09:31:01.511] I0910 09:31:01.414341   53412 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568107858-2980", Name:"nginx", UID:"54dc82cf-65d5-4f10-8351-aa78dd5cf6c5", APIVersion:"apps/v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W0910 09:31:01.511] I0910 09:31:01.418876   53412 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568107858-2980", Name:"nginx-8484dd655", UID:"4c4fe5db-7381-41e2-aede-7333eff17163", APIVersion:"apps/v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-sx4bf
W0910 09:31:01.512] I0910 09:31:01.422877   53412 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568107858-2980", Name:"nginx-8484dd655", UID:"4c4fe5db-7381-41e2-aede-7333eff17163", APIVersion:"apps/v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-vm4gn
W0910 09:31:01.512] I0910 09:31:01.423191   53412 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568107858-2980", Name:"nginx-8484dd655", UID:"4c4fe5db-7381-41e2-aede-7333eff17163", APIVersion:"apps/v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-6fxmx
I0910 09:31:01.613] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0910 09:31:05.779] (BSuccessful
I0910 09:31:05.779] message:Error from server (Conflict): error when applying patch:
I0910 09:31:05.780] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568107858-2980\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0910 09:31:05.780] to:
I0910 09:31:05.780] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0910 09:31:05.781] Name: "nginx", Namespace: "namespace-1568107858-2980"
I0910 09:31:05.783] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568107858-2980\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-09-10T09:31:01Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1568107858-2980" "resourceVersion":"604" "selfLink":"/apis/apps/v1/namespaces/namespace-1568107858-2980/deployments/nginx" "uid":"54dc82cf-65d5-4f10-8351-aa78dd5cf6c5"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-09-10T09:31:01Z" "lastUpdateTime":"2019-09-10T09:31:01Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-09-10T09:31:01Z" "lastUpdateTime":"2019-09-10T09:31:01Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0910 09:31:05.783] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0910 09:31:05.783] has:Error from server (Conflict)
W0910 09:31:05.925] I0910 09:31:05.924864   53412 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1568107848-10061
I0910 09:31:11.111] deployment.apps/nginx configured
W0910 09:31:11.212] I0910 09:31:11.115898   53412 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568107858-2980", Name:"nginx", UID:"2a616da7-3327-447f-8cb3-f69974dcd982", APIVersion:"apps/v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
W0910 09:31:11.213] I0910 09:31:11.120430   53412 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568107858-2980", Name:"nginx-668b6c7744", UID:"67903bb0-22b5-49e4-9cce-8a8d280c8fc3", APIVersion:"apps/v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-bl7ld
W0910 09:31:11.213] I0910 09:31:11.124435   53412 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568107858-2980", Name:"nginx-668b6c7744", UID:"67903bb0-22b5-49e4-9cce-8a8d280c8fc3", APIVersion:"apps/v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-6pnz6
W0910 09:31:11.214] I0910 09:31:11.124793   53412 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568107858-2980", Name:"nginx-668b6c7744", UID:"67903bb0-22b5-49e4-9cce-8a8d280c8fc3", APIVersion:"apps/v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-6sb4v
... skipping 142 lines ...
I0910 09:31:18.763] +++ [0910 09:31:18] Creating namespace namespace-1568107878-6422
I0910 09:31:18.852] namespace/namespace-1568107878-6422 created
I0910 09:31:18.933] Context "test" modified.
I0910 09:31:18.939] +++ [0910 09:31:18] Testing kubectl get
I0910 09:31:19.039] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:31:19.135] (BSuccessful
I0910 09:31:19.136] message:Error from server (NotFound): pods "abc" not found
I0910 09:31:19.136] has:pods "abc" not found
I0910 09:31:19.238] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:31:19.334] (BSuccessful
I0910 09:31:19.335] message:Error from server (NotFound): pods "abc" not found
I0910 09:31:19.335] has:pods "abc" not found
I0910 09:31:19.430] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:31:19.522] (BSuccessful
I0910 09:31:19.522] message:{
I0910 09:31:19.523]     "apiVersion": "v1",
I0910 09:31:19.523]     "items": [],
... skipping 23 lines ...
I0910 09:31:19.901] has not:No resources found
I0910 09:31:19.999] Successful
I0910 09:31:20.000] message:NAME
I0910 09:31:20.000] has not:No resources found
I0910 09:31:20.099] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:31:20.219] (BSuccessful
I0910 09:31:20.220] message:error: the server doesn't have a resource type "foobar"
I0910 09:31:20.221] has not:No resources found
I0910 09:31:20.326] Successful
I0910 09:31:20.327] message:No resources found in namespace-1568107878-6422 namespace.
I0910 09:31:20.327] has:No resources found
I0910 09:31:20.426] Successful
I0910 09:31:20.427] message:
I0910 09:31:20.427] has not:No resources found
I0910 09:31:20.526] Successful
I0910 09:31:20.526] message:No resources found in namespace-1568107878-6422 namespace.
I0910 09:31:20.527] has:No resources found
I0910 09:31:20.627] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:31:20.725] (BSuccessful
I0910 09:31:20.726] message:Error from server (NotFound): pods "abc" not found
I0910 09:31:20.726] has:pods "abc" not found
I0910 09:31:20.727] FAIL!
I0910 09:31:20.728] message:Error from server (NotFound): pods "abc" not found
I0910 09:31:20.728] has not:List
I0910 09:31:20.728] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0910 09:31:20.857] Successful
I0910 09:31:20.858] message:I0910 09:31:20.797371   63415 loader.go:375] Config loaded from file:  /tmp/tmp.MMB1Bu2U59/.kube/config
I0910 09:31:20.858] I0910 09:31:20.799207   63415 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0910 09:31:20.859] I0910 09:31:20.822009   63415 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0910 09:31:26.595] Successful
I0910 09:31:26.596] message:NAME    DATA   AGE
I0910 09:31:26.596] one     0      0s
I0910 09:31:26.596] three   0      0s
I0910 09:31:26.596] two     0      0s
I0910 09:31:26.596] STATUS    REASON          MESSAGE
I0910 09:31:26.597] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0910 09:31:26.597] has not:watch is only supported on individual resources
I0910 09:31:27.714] Successful
I0910 09:31:27.714] message:STATUS    REASON          MESSAGE
I0910 09:31:27.715] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0910 09:31:27.715] has not:watch is only supported on individual resources
I0910 09:31:27.721] +++ [0910 09:31:27] Creating namespace namespace-1568107887-12848
I0910 09:31:27.811] namespace/namespace-1568107887-12848 created
I0910 09:31:27.906] Context "test" modified.
I0910 09:31:28.038] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:31:28.236] (Bpod/valid-pod created
... skipping 56 lines ...
I0910 09:31:28.343] }
I0910 09:31:28.450] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0910 09:31:28.743] (B<no value>Successful
I0910 09:31:28.744] message:valid-pod:
I0910 09:31:28.744] has:valid-pod:
I0910 09:31:28.870] Successful
I0910 09:31:28.871] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0910 09:31:28.871] 	template was:
I0910 09:31:28.872] 		{.missing}
I0910 09:31:28.872] 	object given to jsonpath engine was:
I0910 09:31:28.873] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-09-10T09:31:28Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1568107887-12848", "resourceVersion":"704", "selfLink":"/api/v1/namespaces/namespace-1568107887-12848/pods/valid-pod", "uid":"323656c4-7eec-4cad-a356-205b8932739a"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0910 09:31:28.873] has:missing is not found
W0910 09:31:28.980] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0910 09:31:29.081] Successful
I0910 09:31:29.082] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0910 09:31:29.082] 	template was:
I0910 09:31:29.083] 		{{.missing}}
I0910 09:31:29.083] 	raw data was:
I0910 09:31:29.085] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-09-10T09:31:28Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1568107887-12848","resourceVersion":"704","selfLink":"/api/v1/namespaces/namespace-1568107887-12848/pods/valid-pod","uid":"323656c4-7eec-4cad-a356-205b8932739a"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0910 09:31:29.085] 	object given to template engine was:
I0910 09:31:29.086] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-09-10T09:31:28Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1568107887-12848 resourceVersion:704 selfLink:/api/v1/namespaces/namespace-1568107887-12848/pods/valid-pod uid:323656c4-7eec-4cad-a356-205b8932739a] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0910 09:31:29.087] has:map has no entry for key "missing"
I0910 09:31:30.099] Successful
I0910 09:31:30.099] message:NAME        READY   STATUS    RESTARTS   AGE
I0910 09:31:30.099] valid-pod   0/1     Pending   0          1s
I0910 09:31:30.099] STATUS      REASON          MESSAGE
I0910 09:31:30.100] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0910 09:31:30.100] has:STATUS
I0910 09:31:30.101] Successful
I0910 09:31:30.102] message:NAME        READY   STATUS    RESTARTS   AGE
I0910 09:31:30.102] valid-pod   0/1     Pending   0          1s
I0910 09:31:30.103] STATUS      REASON          MESSAGE
I0910 09:31:30.103] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0910 09:31:30.103] has:valid-pod
I0910 09:31:31.196] Successful
I0910 09:31:31.197] message:pod/valid-pod
I0910 09:31:31.197] has not:STATUS
I0910 09:31:31.199] Successful
I0910 09:31:31.199] message:pod/valid-pod
... skipping 72 lines ...
I0910 09:31:32.313] status:
I0910 09:31:32.313]   phase: Pending
I0910 09:31:32.314]   qosClass: Guaranteed
I0910 09:31:32.314] ---
I0910 09:31:32.314] has:name: valid-pod
I0910 09:31:32.402] Successful
I0910 09:31:32.402] message:Error from server (NotFound): pods "invalid-pod" not found
I0910 09:31:32.403] has:"invalid-pod" not found
I0910 09:31:32.491] pod "valid-pod" deleted
I0910 09:31:32.598] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:31:32.776] (Bpod/redis-master created
I0910 09:31:32.782] pod/valid-pod created
I0910 09:31:32.886] Successful
... skipping 35 lines ...
I0910 09:31:34.333] +++ command: run_kubectl_exec_pod_tests
I0910 09:31:34.347] +++ [0910 09:31:34] Creating namespace namespace-1568107894-13005
I0910 09:31:34.441] namespace/namespace-1568107894-13005 created
I0910 09:31:34.529] Context "test" modified.
I0910 09:31:34.536] +++ [0910 09:31:34] Testing kubectl exec POD COMMAND
I0910 09:31:34.636] Successful
I0910 09:31:34.636] message:Error from server (NotFound): pods "abc" not found
I0910 09:31:34.637] has:pods "abc" not found
I0910 09:31:34.843] pod/test-pod created
I0910 09:31:34.983] Successful
I0910 09:31:34.984] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0910 09:31:34.984] has not:pods "test-pod" not found
I0910 09:31:34.986] Successful
I0910 09:31:34.987] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0910 09:31:34.987] has not:pod or type/name must be specified
I0910 09:31:35.089] pod "test-pod" deleted
I0910 09:31:35.112] +++ exit code: 0
I0910 09:31:35.160] Recording: run_kubectl_exec_resource_name_tests
I0910 09:31:35.161] Running command: run_kubectl_exec_resource_name_tests
I0910 09:31:35.190] 
... skipping 2 lines ...
I0910 09:31:35.200] +++ command: run_kubectl_exec_resource_name_tests
I0910 09:31:35.215] +++ [0910 09:31:35] Creating namespace namespace-1568107895-19139
I0910 09:31:35.317] namespace/namespace-1568107895-19139 created
I0910 09:31:35.416] Context "test" modified.
I0910 09:31:35.425] +++ [0910 09:31:35] Testing kubectl exec TYPE/NAME COMMAND
I0910 09:31:35.554] Successful
I0910 09:31:35.555] message:error: the server doesn't have a resource type "foo"
I0910 09:31:35.555] has:error:
I0910 09:31:35.681] Successful
I0910 09:31:35.682] message:Error from server (NotFound): deployments.apps "bar" not found
I0910 09:31:35.682] has:"bar" not found
I0910 09:31:35.905] pod/test-pod created
I0910 09:31:36.110] replicaset.apps/frontend created
W0910 09:31:36.211] I0910 09:31:36.115356   53412 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568107895-19139", Name:"frontend", UID:"7c1b78f8-9f42-4c2d-86bb-5dcbb048f471", APIVersion:"apps/v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bsczg
W0910 09:31:36.212] I0910 09:31:36.120201   53412 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568107895-19139", Name:"frontend", UID:"7c1b78f8-9f42-4c2d-86bb-5dcbb048f471", APIVersion:"apps/v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wrn7l
W0910 09:31:36.212] I0910 09:31:36.120266   53412 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568107895-19139", Name:"frontend", UID:"7c1b78f8-9f42-4c2d-86bb-5dcbb048f471", APIVersion:"apps/v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s82nz
I0910 09:31:36.313] configmap/test-set-env-config created
I0910 09:31:36.425] Successful
I0910 09:31:36.425] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0910 09:31:36.425] has:not implemented
I0910 09:31:36.533] Successful
I0910 09:31:36.534] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0910 09:31:36.534] has not:not found
I0910 09:31:36.536] Successful
I0910 09:31:36.537] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0910 09:31:36.537] has not:pod or type/name must be specified
I0910 09:31:36.659] Successful
I0910 09:31:36.660] message:Error from server (BadRequest): pod frontend-bsczg does not have a host assigned
I0910 09:31:36.660] has not:not found
I0910 09:31:36.663] Successful
I0910 09:31:36.663] message:Error from server (BadRequest): pod frontend-bsczg does not have a host assigned
I0910 09:31:36.664] has not:pod or type/name must be specified
I0910 09:31:36.753] pod "test-pod" deleted
I0910 09:31:36.845] replicaset.apps "frontend" deleted
I0910 09:31:36.936] configmap "test-set-env-config" deleted
I0910 09:31:36.958] +++ exit code: 0
I0910 09:31:36.998] Recording: run_create_secret_tests
I0910 09:31:36.999] Running command: run_create_secret_tests
I0910 09:31:37.027] 
I0910 09:31:37.030] +++ Running case: test-cmd.run_create_secret_tests 
I0910 09:31:37.033] +++ working dir: /go/src/k8s.io/kubernetes
I0910 09:31:37.037] +++ command: run_create_secret_tests
I0910 09:31:37.143] Successful
I0910 09:31:37.144] message:Error from server (NotFound): secrets "mysecret" not found
I0910 09:31:37.144] has:secrets "mysecret" not found
I0910 09:31:37.330] Successful
I0910 09:31:37.330] message:Error from server (NotFound): secrets "mysecret" not found
I0910 09:31:37.330] has:secrets "mysecret" not found
I0910 09:31:37.332] Successful
I0910 09:31:37.333] message:user-specified
I0910 09:31:37.333] has:user-specified
I0910 09:31:37.414] Successful
I0910 09:31:37.500] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"65a0e402-e4af-4be2-94ee-ae16b4a40c78","resourceVersion":"778","creationTimestamp":"2019-09-10T09:31:37Z"}}
... skipping 2 lines ...
I0910 09:31:37.703] has:uid
I0910 09:31:37.801] Successful
I0910 09:31:37.802] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"65a0e402-e4af-4be2-94ee-ae16b4a40c78","resourceVersion":"779","creationTimestamp":"2019-09-10T09:31:37Z"},"data":{"key1":"config1"}}
I0910 09:31:37.802] has:config1
I0910 09:31:37.896] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"65a0e402-e4af-4be2-94ee-ae16b4a40c78"}}
I0910 09:31:38.022] Successful
I0910 09:31:38.023] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0910 09:31:38.023] has:configmaps "tester-update-cm" not found
I0910 09:31:38.041] +++ exit code: 0
I0910 09:31:38.090] Recording: run_kubectl_create_kustomization_directory_tests
I0910 09:31:38.090] Running command: run_kubectl_create_kustomization_directory_tests
I0910 09:31:38.121] 
I0910 09:31:38.124] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
I0910 09:31:41.308] valid-pod   0/1     Pending   0          1s
I0910 09:31:41.308] has:valid-pod
I0910 09:31:42.409] Successful
I0910 09:31:42.409] message:NAME        READY   STATUS    RESTARTS   AGE
I0910 09:31:42.410] valid-pod   0/1     Pending   0          1s
I0910 09:31:42.410] STATUS      REASON          MESSAGE
I0910 09:31:42.410] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0910 09:31:42.410] has:Timeout exceeded while reading body
I0910 09:31:42.506] Successful
I0910 09:31:42.507] message:NAME        READY   STATUS    RESTARTS   AGE
I0910 09:31:42.507] valid-pod   0/1     Pending   0          2s
I0910 09:31:42.507] has:valid-pod
I0910 09:31:42.591] Successful
I0910 09:31:42.592] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0910 09:31:42.592] has:Invalid timeout value
I0910 09:31:42.708] pod "valid-pod" deleted
I0910 09:31:42.734] +++ exit code: 0
I0910 09:31:42.776] Recording: run_crd_tests
I0910 09:31:42.776] Running command: run_crd_tests
I0910 09:31:42.805] 
... skipping 158 lines ...
I0910 09:31:48.606] foo.company.com/test patched
I0910 09:31:48.737] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0910 09:31:48.847] foo.company.com/test patched
I0910 09:31:48.965] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0910 09:31:49.077] foo.company.com/test patched
I0910 09:31:49.190] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0910 09:31:49.376] +++ [0910 09:31:49] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0910 09:31:49.454] {
I0910 09:31:49.454]     "apiVersion": "company.com/v1",
I0910 09:31:49.455]     "kind": "Foo",
I0910 09:31:49.455]     "metadata": {
I0910 09:31:49.455]         "annotations": {
I0910 09:31:49.455]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 190 lines ...
I0910 09:32:02.182] namespace/non-native-resources created
I0910 09:32:02.379] bar.company.com/test created
I0910 09:32:02.508] crd.sh:455: Successful get bars {{len .items}}: 1
I0910 09:32:02.607] namespace "non-native-resources" deleted
I0910 09:32:07.882] crd.sh:458: Successful get bars {{len .items}}: 0
I0910 09:32:08.091] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0910 09:32:08.192] Error from server (NotFound): namespaces "non-native-resources" not found
I0910 09:32:08.293] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0910 09:32:08.334] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0910 09:32:08.453] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0910 09:32:08.490] +++ exit code: 0
I0910 09:32:08.530] Recording: run_cmd_with_img_tests
I0910 09:32:08.531] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0910 09:32:08.894] I0910 09:32:08.893331   53412 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568107928-29775", Name:"test1-6cdffdb5b8", UID:"59eff4db-dcf1-4b08-bc20-5b6cf10107e6", APIVersion:"apps/v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-ttlh7
I0910 09:32:08.995] Successful
I0910 09:32:08.995] message:deployment.apps/test1 created
I0910 09:32:08.995] has:deployment.apps/test1 created
I0910 09:32:08.996] deployment.apps "test1" deleted
I0910 09:32:09.085] Successful
I0910 09:32:09.086] message:error: Invalid image name "InvalidImageName": invalid reference format
I0910 09:32:09.087] has:error: Invalid image name "InvalidImageName": invalid reference format
I0910 09:32:09.103] +++ exit code: 0
I0910 09:32:09.152] +++ [0910 09:32:09] Testing recursive resources
I0910 09:32:09.158] +++ [0910 09:32:09] Creating namespace namespace-1568107929-17888
I0910 09:32:09.248] namespace/namespace-1568107929-17888 created
W0910 09:32:09.349] W0910 09:32:09.102641   49861 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0910 09:32:09.351] E0910 09:32:09.104879   53412 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0910 09:32:09.351] W0910 09:32:09.224072   49861 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0910 09:32:09.352] E0910 09:32:09.226252   53412 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0910 09:32:09.353] W0910 09:32:09.342334   49861 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0910 09:32:09.353] E0910 09:32:09.345406   53412 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0910 09:32:09.454] Context "test" modified.
I0910 09:32:09.502] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:32:09.862] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0910 09:32:09.866] Successful
I0910 09:32:09.866] message:pod/busybox0 created
I0910 09:32:09.867] pod/busybox1 created
I0910 09:32:09.867] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0910 09:32:09.867] has:error validating data: kind not set
I0910 09:32:09.971] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0910 09:32:10.191] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0910 09:32:10.194] Successful
I0910 09:32:10.196] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0910 09:32:10.196] has:Object 'Kind' is missing
W0910 09:32:10.297] W0910 09:32:09.464975   49861 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0910 09:32:10.297] E0910 09:32:09.467988   53412 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0910 09:32:10.298] E0910 09:32:10.107103   53412 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0910 09:32:10.298] E0910 09:32:10.228500   53412 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0910 09:32:10.348] E0910 09:32:10.347659   53412 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0910 09:32:10.449] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0910 09:32:10.714] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0910 09:32:10.717] Successful
I0910 09:32:10.718] message:pod/busybox0 replaced
I0910 09:32:10.718] pod/busybox1 replaced
I0910 09:32:10.718] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0910 09:32:10.718] has:error validating data: kind not set
W0910 09:32:10.819] E0910 09:32:10.470353   53412 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0910 09:32:10.920] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0910 09:32:10.991] Successful
I0910 09:32:10.992] message:Name:         busybox0
I0910 09:32:10.992] Namespace:    namespace-1568107929-17888
I0910 09:32:10.993] Priority:     0
I0910 09:32:10.993] Node:         <none>
... skipping 159 lines ...
I0910 09:32:11.027] has:Object 'Kind' is missing
I0910 09:32:11.125] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0910 09:32:11.385] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0910 09:32:11.388] Successful
I0910 09:32:11.389] message:pod/busybox0 annotated
I0910 09:32:11.389] pod/busybox1 annotated
I0910 09:32:11.389] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0910 09:32:11.389] has:Object 'Kind' is missing
W0910 09:32:11.490] E0910 09:32:11.109075   53412 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0910 09:32:11.491] E0910 09:32:11.230223   53412 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0910 09:32:11.491] E0910 09:32:11.349884   53412 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0910 09:32:11.491] E0910 09:32:11.473226   53412 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0910 09:32:11.592] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0910 09:32:11.862] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0910 09:32:11.866] Successful
I0910 09:32:11.866] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0910 09:32:11.867] pod/busybox0 configured
I0910 09:32:11.867] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0910 09:32:11.867] pod/busybox1 configured
I0910 09:32:11.867] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0910 09:32:11.868] has:error validating data: kind not set
I0910 09:32:11.976] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0910 09:32:12.169] deployment.apps/nginx created
W0910 09:32:12.270] E0910 09:32:12.111377   53412 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0910 09:32:12.271] I0910 09:32:12.173648   53412 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568107929-17888", Name:"nginx"