PR (damemi): Fix preemption race conditions on heavily utilized nodes for e2e tests
Result: FAILURE
Tests: 1 failed / 2862 succeeded
Started: 2019-09-11 18:07
Elapsed: 28m25s
Builder: gke-prow-ssd-pool-1a225945-m2ml
Refs: master:001f2cd2, 82350:294faa3b
Pod: dbfd970d-d4be-11e9-a582-8a06e185f399
infra-commit: 069bf1fee
repo: k8s.io/kubernetes
repo-commit: 025c594c34718cf9b86d5581b70fb1f7dbafff34
repos: {u'k8s.io/kubernetes': u'master:001f2cd2b553d06028c8542c8817820ee05d657f,82350:294faa3bbef2151e56afdff87c4474a3d6a9b93b'}

Test Failures

k8s.io/kubernetes/test/integration/volumescheduling TestVolumeBinding (1m6s)

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeBinding$
=== RUN   TestVolumeBinding
W0911 18:31:16.988723  110822 feature_gate.go:208] Setting GA feature gate PersistentLocalVolumes=true. It will be removed in a future release.
W0911 18:31:16.989616  110822 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0911 18:31:16.989645  110822 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0911 18:31:16.989662  110822 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0911 18:31:16.989679  110822 master.go:259] Using reconciler: 
I0911 18:31:16.992179  110822 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:16.993086  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:16.993252  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:16.997576  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:16.997619  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.001656  110822 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0911 18:31:17.001784  110822 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0911 18:31:17.001951  110822 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.003207  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.003331  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.003681  110822 watch_cache.go:405] Replace watchCache (rev: 31989) 
I0911 18:31:17.006736  110822 store.go:1342] Monitoring events count at <storage-prefix>//events
I0911 18:31:17.006831  110822 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0911 18:31:17.007038  110822 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.007577  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.008205  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.008032  110822 watch_cache.go:405] Replace watchCache (rev: 31989) 
I0911 18:31:17.009229  110822 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0911 18:31:17.009283  110822 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0911 18:31:17.009406  110822 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.010349  110822 watch_cache.go:405] Replace watchCache (rev: 31989) 
I0911 18:31:17.010366  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.010622  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.011874  110822 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0911 18:31:17.012038  110822 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0911 18:31:17.012735  110822 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.013021  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.013264  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.013079  110822 watch_cache.go:405] Replace watchCache (rev: 31989) 
I0911 18:31:17.014732  110822 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0911 18:31:17.014780  110822 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0911 18:31:17.014997  110822 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.016696  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.016744  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.020139  110822 watch_cache.go:405] Replace watchCache (rev: 31989) 
I0911 18:31:17.022228  110822 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0911 18:31:17.022330  110822 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0911 18:31:17.022734  110822 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.023010  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.023118  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.023206  110822 watch_cache.go:405] Replace watchCache (rev: 31989) 
I0911 18:31:17.024243  110822 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0911 18:31:17.024329  110822 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0911 18:31:17.024428  110822 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.024711  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.024727  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.025288  110822 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0911 18:31:17.025390  110822 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0911 18:31:17.025472  110822 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.025945  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.025974  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.026031  110822 watch_cache.go:405] Replace watchCache (rev: 31989) 
I0911 18:31:17.027225  110822 watch_cache.go:405] Replace watchCache (rev: 31989) 
I0911 18:31:17.027422  110822 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0911 18:31:17.027489  110822 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0911 18:31:17.027689  110822 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.027800  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.027818  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.028866  110822 watch_cache.go:405] Replace watchCache (rev: 31989) 
I0911 18:31:17.029417  110822 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0911 18:31:17.029686  110822 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0911 18:31:17.029767  110822 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.029942  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.029962  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.030838  110822 watch_cache.go:405] Replace watchCache (rev: 31989) 
I0911 18:31:17.031978  110822 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0911 18:31:17.032037  110822 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0911 18:31:17.032859  110822 watch_cache.go:405] Replace watchCache (rev: 31989) 
I0911 18:31:17.035743  110822 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.035972  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.035995  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.037275  110822 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0911 18:31:17.037432  110822 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0911 18:31:17.037463  110822 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.037581  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.037596  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.038422  110822 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0911 18:31:17.038650  110822 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0911 18:31:17.039068  110822 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.039227  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.039248  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.039643  110822 watch_cache.go:405] Replace watchCache (rev: 31990) 
I0911 18:31:17.039693  110822 watch_cache.go:405] Replace watchCache (rev: 31990) 
I0911 18:31:17.040060  110822 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0911 18:31:17.040088  110822 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.040130  110822 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0911 18:31:17.040266  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.040290  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.041610  110822 watch_cache.go:405] Replace watchCache (rev: 31990) 
I0911 18:31:17.042473  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.042531  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.045770  110822 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.046548  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.046606  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.047920  110822 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0911 18:31:17.048012  110822 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0911 18:31:17.047968  110822 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0911 18:31:17.048474  110822 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.048816  110822 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.049388  110822 watch_cache.go:405] Replace watchCache (rev: 31990) 
I0911 18:31:17.049461  110822 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.050174  110822 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.051085  110822 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.051625  110822 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.051931  110822 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.052029  110822 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.052226  110822 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.052970  110822 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.053571  110822 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.053893  110822 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.054695  110822 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.055048  110822 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.055674  110822 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.055999  110822 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.057004  110822 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.057276  110822 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.057487  110822 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.057731  110822 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.057950  110822 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.058157  110822 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.058429  110822 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.059434  110822 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.059858  110822 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.060611  110822 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.061659  110822 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.062081  110822 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.062372  110822 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.063044  110822 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.063678  110822 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.064399  110822 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.065348  110822 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.066216  110822 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.066984  110822 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.067293  110822 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.067517  110822 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0911 18:31:17.067619  110822 master.go:461] Enabling API group "authentication.k8s.io".
I0911 18:31:17.067697  110822 master.go:461] Enabling API group "authorization.k8s.io".
I0911 18:31:17.067906  110822 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.068168  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.068248  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.069679  110822 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 18:31:17.069786  110822 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 18:31:17.070315  110822 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.071700  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.071866  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.071368  110822 watch_cache.go:405] Replace watchCache (rev: 31990) 
I0911 18:31:17.076021  110822 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 18:31:17.076674  110822 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.076823  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.076849  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.077306  110822 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 18:31:17.078237  110822 watch_cache.go:405] Replace watchCache (rev: 31991) 
I0911 18:31:17.078561  110822 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 18:31:17.078597  110822 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 18:31:17.078682  110822 master.go:461] Enabling API group "autoscaling".
I0911 18:31:17.078919  110822 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.079186  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.079387  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.080369  110822 watch_cache.go:405] Replace watchCache (rev: 31991) 
I0911 18:31:17.080575  110822 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0911 18:31:17.080626  110822 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0911 18:31:17.081570  110822 watch_cache.go:405] Replace watchCache (rev: 31991) 
I0911 18:31:17.082022  110822 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.082412  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.082697  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.083864  110822 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0911 18:31:17.084019  110822 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0911 18:31:17.084049  110822 master.go:461] Enabling API group "batch".
I0911 18:31:17.084427  110822 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.084599  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.084628  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.085676  110822 watch_cache.go:405] Replace watchCache (rev: 31991) 
I0911 18:31:17.085999  110822 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0911 18:31:17.086027  110822 master.go:461] Enabling API group "certificates.k8s.io".
I0911 18:31:17.086141  110822 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0911 18:31:17.086560  110822 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.087506  110822 watch_cache.go:405] Replace watchCache (rev: 31991) 
I0911 18:31:17.088299  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.088395  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.089275  110822 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0911 18:31:17.089442  110822 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.089466  110822 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0911 18:31:17.089572  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.089752  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.090298  110822 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0911 18:31:17.090319  110822 master.go:461] Enabling API group "coordination.k8s.io".
I0911 18:31:17.090334  110822 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0911 18:31:17.090549  110822 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.090733  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.090771  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.090865  110822 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0911 18:31:17.091139  110822 watch_cache.go:405] Replace watchCache (rev: 31991) 
I0911 18:31:17.091567  110822 watch_cache.go:405] Replace watchCache (rev: 31991) 
I0911 18:31:17.092138  110822 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0911 18:31:17.092164  110822 master.go:461] Enabling API group "extensions".
I0911 18:31:17.092310  110822 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.092441  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.092461  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.093011  110822 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0911 18:31:17.094280  110822 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0911 18:31:17.094478  110822 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.094689  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.094717  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.094730  110822 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0911 18:31:17.095337  110822 watch_cache.go:405] Replace watchCache (rev: 31991) 
I0911 18:31:17.095945  110822 watch_cache.go:405] Replace watchCache (rev: 31991) 
I0911 18:31:17.096099  110822 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0911 18:31:17.096206  110822 master.go:461] Enabling API group "networking.k8s.io".
I0911 18:31:17.096381  110822 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0911 18:31:17.096772  110822 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.097680  110822 watch_cache.go:405] Replace watchCache (rev: 31991) 
I0911 18:31:17.098045  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.098238  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.099441  110822 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0911 18:31:17.099467  110822 master.go:461] Enabling API group "node.k8s.io".
I0911 18:31:17.099521  110822 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0911 18:31:17.100330  110822 watch_cache.go:405] Replace watchCache (rev: 31991) 
I0911 18:31:17.100702  110822 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.100807  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.101730  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.108621  110822 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0911 18:31:17.109007  110822 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.111237  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.111293  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.111777  110822 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0911 18:31:17.115274  110822 watch_cache.go:405] Replace watchCache (rev: 31991) 
I0911 18:31:17.123006  110822 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0911 18:31:17.123172  110822 master.go:461] Enabling API group "policy".
I0911 18:31:17.123057  110822 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0911 18:31:17.123630  110822 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.124860  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.125023  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.126169  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.126979  110822 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0911 18:31:17.127085  110822 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0911 18:31:17.127181  110822 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.127358  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.127392  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.128362  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.128798  110822 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0911 18:31:17.128930  110822 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0911 18:31:17.128914  110822 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.129107  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.129161  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.130066  110822 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0911 18:31:17.130203  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.130439  110822 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0911 18:31:17.130445  110822 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.130773  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.130885  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.131305  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.131612  110822 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0911 18:31:17.131724  110822 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0911 18:31:17.131730  110822 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.131987  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.132023  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.133097  110822 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0911 18:31:17.133233  110822 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0911 18:31:17.133562  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.133720  110822 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.133923  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.133993  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.135344  110822 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0911 18:31:17.135373  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.135463  110822 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0911 18:31:17.135685  110822 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.136109  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.136207  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.136576  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.136899  110822 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0911 18:31:17.136976  110822 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0911 18:31:17.137601  110822 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.137799  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.137825  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.138108  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.138374  110822 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0911 18:31:17.138413  110822 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0911 18:31:17.138458  110822 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0911 18:31:17.139612  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.140624  110822 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.140867  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.140897  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.141601  110822 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0911 18:31:17.141701  110822 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0911 18:31:17.142045  110822 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.142257  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.142285  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.142607  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.143594  110822 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0911 18:31:17.143662  110822 master.go:461] Enabling API group "scheduling.k8s.io".
I0911 18:31:17.143716  110822 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0911 18:31:17.144461  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.144904  110822 master.go:450] Skipping disabled API group "settings.k8s.io".
I0911 18:31:17.145248  110822 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.145587  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.145702  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.146894  110822 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0911 18:31:17.146979  110822 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0911 18:31:17.147522  110822 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.147808  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.147929  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.148624  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.148929  110822 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0911 18:31:17.149081  110822 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.148980  110822 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0911 18:31:17.149417  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.149629  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.150227  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.150649  110822 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0911 18:31:17.150778  110822 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.151009  110822 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0911 18:31:17.151133  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.151230  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.151736  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.152670  110822 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0911 18:31:17.152782  110822 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0911 18:31:17.153051  110822 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.153263  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.153386  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.153845  110822 watch_cache.go:405] Replace watchCache (rev: 31994) 
I0911 18:31:17.155175  110822 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0911 18:31:17.155467  110822 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.155718  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.155828  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.155255  110822 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0911 18:31:17.156437  110822 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0911 18:31:17.156600  110822 master.go:461] Enabling API group "storage.k8s.io".
I0911 18:31:17.156644  110822 watch_cache.go:405] Replace watchCache (rev: 31995) 
I0911 18:31:17.156643  110822 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0911 18:31:17.157109  110822 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.157325  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.157429  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.157612  110822 watch_cache.go:405] Replace watchCache (rev: 31995) 
I0911 18:31:17.159206  110822 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0911 18:31:17.159352  110822 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0911 18:31:17.159439  110822 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.159725  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.159750  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.160482  110822 watch_cache.go:405] Replace watchCache (rev: 31995) 
I0911 18:31:17.160640  110822 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0911 18:31:17.160825  110822 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.160985  110822 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0911 18:31:17.161055  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.161078  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.162211  110822 watch_cache.go:405] Replace watchCache (rev: 31995) 
I0911 18:31:17.162525  110822 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0911 18:31:17.162611  110822 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0911 18:31:17.162748  110822 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.162870  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.162895  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.163396  110822 watch_cache.go:405] Replace watchCache (rev: 31995) 
I0911 18:31:17.163650  110822 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0911 18:31:17.163732  110822 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0911 18:31:17.163811  110822 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.163956  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.163976  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.164750  110822 watch_cache.go:405] Replace watchCache (rev: 31995) 
I0911 18:31:17.164862  110822 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0911 18:31:17.164908  110822 master.go:461] Enabling API group "apps".
I0911 18:31:17.164959  110822 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.165175  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.165202  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.165842  110822 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0911 18:31:17.166121  110822 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0911 18:31:17.166162  110822 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.166220  110822 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0911 18:31:17.166627  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.166648  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.166909  110822 watch_cache.go:405] Replace watchCache (rev: 31995) 
I0911 18:31:17.167517  110822 watch_cache.go:405] Replace watchCache (rev: 31995) 
I0911 18:31:17.167583  110822 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0911 18:31:17.167609  110822 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.167791  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.167819  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.167877  110822 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0911 18:31:17.168622  110822 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0911 18:31:17.168655  110822 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.168699  110822 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0911 18:31:17.168817  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.168843  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.169229  110822 watch_cache.go:405] Replace watchCache (rev: 31995) 
I0911 18:31:17.169772  110822 watch_cache.go:405] Replace watchCache (rev: 31995) 
I0911 18:31:17.170296  110822 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0911 18:31:17.170315  110822 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0911 18:31:17.170350  110822 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0911 18:31:17.170362  110822 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.170837  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.170859  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:17.171059  110822 watch_cache.go:405] Replace watchCache (rev: 31995) 
I0911 18:31:17.171552  110822 store.go:1342] Monitoring events count at <storage-prefix>//events
I0911 18:31:17.171577  110822 master.go:461] Enabling API group "events.k8s.io".
I0911 18:31:17.171591  110822 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0911 18:31:17.171868  110822 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.172086  110822 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.172305  110822 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.172414  110822 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.172541  110822 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.172617  110822 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.172751  110822 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.172850  110822 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.172938  110822 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.173314  110822 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.173432  110822 watch_cache.go:405] Replace watchCache (rev: 31995) 
I0911 18:31:17.174789  110822 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.175043  110822 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.175923  110822 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.176166  110822 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.177180  110822 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.177563  110822 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.178186  110822 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.178458  110822 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.179406  110822 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.179650  110822 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 18:31:17.179701  110822 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0911 18:31:17.180373  110822 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.180583  110822 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.180832  110822 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.181820  110822 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.182581  110822 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.183927  110822 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.184249  110822 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.185429  110822 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.186040  110822 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.186374  110822 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.188527  110822 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 18:31:17.188610  110822 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0911 18:31:17.189453  110822 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.189935  110822 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.190543  110822 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.191080  110822 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.191733  110822 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.192354  110822 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.192876  110822 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.193574  110822 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.194029  110822 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.194616  110822 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.195630  110822 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 18:31:17.195724  110822 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0911 18:31:17.196301  110822 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.196853  110822 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 18:31:17.196919  110822 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0911 18:31:17.197658  110822 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.198322  110822 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.198727  110822 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.199273  110822 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.199853  110822 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.200355  110822 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.200867  110822 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 18:31:17.200940  110822 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0911 18:31:17.201976  110822 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.205360  110822 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.206993  110822 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.209076  110822 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.209797  110822 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.210564  110822 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.212749  110822 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.213533  110822 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.217870  110822 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.222066  110822 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.222374  110822 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.222642  110822 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 18:31:17.222696  110822 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0911 18:31:17.222704  110822 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0911 18:31:17.223416  110822 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.224064  110822 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.225231  110822 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.225900  110822 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.226680  110822 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"da48a647-be65-4f29-98ad-e6c70c881bf1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 18:31:17.244238  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.244273  110822 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0911 18:31:17.244286  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.244300  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.244308  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.244318  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.244354  110822 httplog.go:90] GET /healthz: (415.411µs) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:17.245808  110822 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.876387ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:17.248858  110822 httplog.go:90] GET /api/v1/services: (1.172081ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:17.256363  110822 httplog.go:90] GET /api/v1/services: (1.216965ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:17.258760  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.258789  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.258952  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.258978  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.258986  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.259227  110822 httplog.go:90] GET /healthz: (396.539µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:17.260864  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.796023ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:17.261730  110822 httplog.go:90] GET /api/v1/services: (1.659964ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:17.264879  110822 httplog.go:90] POST /api/v1/namespaces: (3.669126ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:17.266149  110822 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.02045ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:17.266732  110822 httplog.go:90] GET /api/v1/services: (1.017947ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:17.268355  110822 httplog.go:90] POST /api/v1/namespaces: (1.800736ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:17.269537  110822 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (746.645µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:17.271595  110822 httplog.go:90] POST /api/v1/namespaces: (1.723552ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:17.346084  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.346116  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.346142  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.346148  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.346158  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.346246  110822 httplog.go:90] GET /healthz: (293.571µs) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:17.360210  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.360428  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.360547  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.360633  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.360696  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.360875  110822 httplog.go:90] GET /healthz: (793.472µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:17.446362  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.446668  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.446824  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.446919  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.447003  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.447237  110822 httplog.go:90] GET /healthz: (1.028958ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:17.460182  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.460222  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.460235  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.460245  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.460254  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.460283  110822 httplog.go:90] GET /healthz: (283.552µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:17.546311  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.546355  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.546369  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.546380  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.546389  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.546422  110822 httplog.go:90] GET /healthz: (289.952µs) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:17.560320  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.560367  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.560381  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.560392  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.560401  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.560436  110822 httplog.go:90] GET /healthz: (265.668µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:17.646129  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.646167  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.646179  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.646188  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.646196  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.646228  110822 httplog.go:90] GET /healthz: (242.816µs) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:17.660426  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.660463  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.660484  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.660491  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.660513  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.660563  110822 httplog.go:90] GET /healthz: (312.005µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:17.747255  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.747292  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.747304  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.747313  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.747321  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.747366  110822 httplog.go:90] GET /healthz: (241.82µs) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:17.760405  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.760442  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.760453  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.760462  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.760470  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.760532  110822 httplog.go:90] GET /healthz: (237.169µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:17.846350  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.846385  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.846405  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.846414  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.846423  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.846458  110822 httplog.go:90] GET /healthz: (262.283µs) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:17.860153  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.860191  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.860203  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.860212  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.860219  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.860313  110822 httplog.go:90] GET /healthz: (302.347µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:17.946117  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.946150  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.946162  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.946171  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.946179  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.946223  110822 httplog.go:90] GET /healthz: (247.162µs) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:17.960136  110822 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 18:31:17.960166  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:17.960175  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:17.960181  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:17.960186  110822 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:17.960208  110822 httplog.go:90] GET /healthz: (210.592µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
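The repeated blocks above show the test harness polling `GET /healthz` roughly every 100 ms, getting a non-200 response while individual checks (`etcd`, the `poststarthook/*` entries) are still unfinished. A minimal sketch of that wait-until-healthy loop is below; `wait_for_healthz` is a hypothetical helper written for illustration, not code from the test harness or kube-apiserver.

```python
import time
import urllib.error
import urllib.request


def wait_for_healthz(url, timeout=30.0, interval=0.1):
    """Poll a /healthz endpoint until it returns HTTP 200 or the timeout expires.

    Mirrors the pattern in the log: the endpoint answers with a failing status
    (individual checks reported as "[-]... failed") until startup work such as
    establishing the etcd client connection completes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, ConnectionError):
            pass  # non-200 response or server not accepting connections yet
        time.sleep(interval)
    return False
```

The `reason withheld` lines in the log are the summary form of the check output; kube-apiserver reports per-check detail only when queried with the `verbose` query parameter on `/healthz`.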
I0911 18:31:17.989750  110822 client.go:361] parsed scheme: "endpoint"
I0911 18:31:17.989847  110822 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 18:31:18.047166  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.047193  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:18.047201  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:18.047207  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:18.047271  110822 httplog.go:90] GET /healthz: (1.250327ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:18.061001  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.061034  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:18.061044  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:18.061052  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:18.061102  110822 httplog.go:90] GET /healthz: (1.077034ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.147798  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.147826  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:18.147833  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:18.147840  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:18.147875  110822 httplog.go:90] GET /healthz: (1.81641ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:18.161326  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.161359  110822 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 18:31:18.161370  110822 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 18:31:18.161379  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 18:31:18.161418  110822 httplog.go:90] GET /healthz: (1.325346ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.232111  110822 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.472979ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.232874  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.222697ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.233228  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.36209ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41846]
I0911 18:31:18.237095  110822 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (2.401885ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.237784  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.97182ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41846]
I0911 18:31:18.238069  110822 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (4.382327ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.239553  110822 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0911 18:31:18.242344  110822 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.56524ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.242993  110822 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (4.833591ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.243627  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (4.313319ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41846]
I0911 18:31:18.244360  110822 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.597093ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.244724  110822 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0911 18:31:18.245077  110822 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0911 18:31:18.244945  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (991.836µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41846]
I0911 18:31:18.246426  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (745.856µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.246868  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.246899  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.246934  110822 httplog.go:90] GET /healthz: (1.078144ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:18.247624  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (798.559µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.249146  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (920.496µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.250668  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.026472ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.252298  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.205338ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.253812  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (884.517µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.257667  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.215703ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.257984  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0911 18:31:18.259189  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (992.557µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.260527  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.260566  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.260603  110822 httplog.go:90] GET /healthz: (773.153µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.261326  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.671303ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.261707  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0911 18:31:18.262953  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.034823ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.265351  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.968937ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.265609  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0911 18:31:18.266922  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.090038ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.269797  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.437604ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.270026  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0911 18:31:18.271375  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.158739ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.273642  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.830969ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.273929  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0911 18:31:18.275063  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (941.456µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.277233  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.727971ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.277591  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0911 18:31:18.279520  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.077065ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.282193  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.106529ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.282446  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0911 18:31:18.283548  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (881.357µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.286421  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.483324ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.286664  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0911 18:31:18.288364  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.323326ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.290696  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.832003ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.291082  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0911 18:31:18.292328  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (898.982µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.295246  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.289ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.295561  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0911 18:31:18.297116  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (988.841µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.299202  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.627929ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.299399  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0911 18:31:18.300932  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.299514ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.303636  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.137564ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.304069  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0911 18:31:18.305062  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (790.991µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.307268  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.507399ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.307520  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0911 18:31:18.308754  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (992.977µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.311258  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.966627ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.311834  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0911 18:31:18.313577  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.327469ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.316378  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.201859ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.316724  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0911 18:31:18.317939  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (902.902µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.320064  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.62328ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.320339  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0911 18:31:18.321671  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.08318ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.326486  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.563932ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.326859  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0911 18:31:18.328686  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.610766ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.331557  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.719304ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.331831  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0911 18:31:18.332907  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (860.269µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.334771  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.443968ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.335025  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0911 18:31:18.336123  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (917.367µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.338171  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.481079ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.338390  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0911 18:31:18.339806  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.142353ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.342197  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.889498ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.342464  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0911 18:31:18.343794  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.079618ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.345975  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.794539ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.346170  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0911 18:31:18.347262  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.347292  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.347321  110822 httplog.go:90] GET /healthz: (1.158746ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:18.347600  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.141494ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.349946  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.953423ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.350244  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0911 18:31:18.351354  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (871.803µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.353266  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.364726ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.353448  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0911 18:31:18.354782  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.007258ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.357379  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.082087ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.357655  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0911 18:31:18.359050  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.18195ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.360769  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.360802  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.360843  110822 httplog.go:90] GET /healthz: (948.148µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.362327  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.429117ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.362671  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0911 18:31:18.363812  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (865.4µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.366531  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.184074ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.366872  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0911 18:31:18.368108  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (895.811µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.370121  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.457908ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.370424  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0911 18:31:18.371555  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (858.307µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.373937  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.697442ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.374309  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0911 18:31:18.375459  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (766.987µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.378115  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.852012ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.379034  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0911 18:31:18.380653  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.197939ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.383057  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.738638ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.383523  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0911 18:31:18.384751  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (955.559µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.387905  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.626641ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.388274  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0911 18:31:18.389664  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.103685ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.392312  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.016466ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.392602  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0911 18:31:18.393842  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (997.453µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.396107  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.795121ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.396408  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0911 18:31:18.397555  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (885.026µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.399863  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.666481ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.400158  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0911 18:31:18.401265  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (850.029µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.403415  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.658214ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.403664  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0911 18:31:18.404899  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.039919ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.407312  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.867558ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.407651  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0911 18:31:18.408622  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (750.337µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.410596  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.496429ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.410888  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0911 18:31:18.412145  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (988.483µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.414071  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.488531ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.414532  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0911 18:31:18.415428  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (624.578µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.417589  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.462041ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.418062  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0911 18:31:18.419103  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (754.611µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.421002  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.307926ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.421261  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0911 18:31:18.422675  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.049785ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.424862  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.68833ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.425188  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0911 18:31:18.426479  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.071122ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.428595  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.749926ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.428898  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0911 18:31:18.430994  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.779718ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.433949  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.04368ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.435756  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0911 18:31:18.437879  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.836099ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.441812  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.296734ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.442160  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0911 18:31:18.443584  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.151362ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.445714  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.613132ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.446011  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0911 18:31:18.447271  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.447297  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.447363  110822 httplog.go:90] GET /healthz: (1.443826ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:18.447386  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.031313ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.449172  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.291538ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.449526  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0911 18:31:18.450775  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (935.498µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.452388  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.285001ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.452692  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0911 18:31:18.453871  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (903.613µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.455602  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.289099ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.455924  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0911 18:31:18.457147  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.050467ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.458987  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.415488ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.459358  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0911 18:31:18.460644  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.011601ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.460777  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.460797  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.460846  110822 httplog.go:90] GET /healthz: (979.162µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.462652  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.374437ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.463014  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0911 18:31:18.472266  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.498164ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.493604  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.615938ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.494094  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0911 18:31:18.512487  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.631485ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.534150  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.307098ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:18.534816  110822 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0911 18:31:18.550181  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.550373  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.551047  110822 httplog.go:90] GET /healthz: (5.001786ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:18.552337  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.51241ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.561784  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.562106  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.562402  110822 httplog.go:90] GET /healthz: (2.261077ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.579071  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.15246ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.579329  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0911 18:31:18.592554  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.660757ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.613211  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.309317ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.613439  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0911 18:31:18.632308  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.339713ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.647645  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.647677  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.647720  110822 httplog.go:90] GET /healthz: (1.278253ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:18.654855  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.091403ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.655176  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0911 18:31:18.660877  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.660904  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.660948  110822 httplog.go:90] GET /healthz: (1.027701ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.672307  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.30664ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.693884  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.021178ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.695365  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0911 18:31:18.712246  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.424996ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.733187  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.309794ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.733674  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0911 18:31:18.746969  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.747007  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.747048  110822 httplog.go:90] GET /healthz: (1.107513ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:18.751861  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.229056ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.761078  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.761128  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.761172  110822 httplog.go:90] GET /healthz: (1.075757ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.773129  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.749682ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.773533  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0911 18:31:18.793435  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.11387ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.812940  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.2111ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.813210  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0911 18:31:18.831908  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.155661ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.846947  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.846975  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.847008  110822 httplog.go:90] GET /healthz: (1.128537ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:18.852886  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.209731ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.853245  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0911 18:31:18.860935  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.860963  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.861011  110822 httplog.go:90] GET /healthz: (1.067413ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.872170  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.408728ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.894208  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.153705ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.894460  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0911 18:31:18.912547  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.777385ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.933534  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.66734ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.934769  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0911 18:31:18.947186  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.947225  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.947266  110822 httplog.go:90] GET /healthz: (1.28129ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:18.957189  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.330871ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.961000  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:18.961029  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:18.961063  110822 httplog.go:90] GET /healthz: (1.090462ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.973077  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.300367ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:18.973341  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0911 18:31:18.992458  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.517121ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.013203  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.433745ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.013464  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0911 18:31:19.031907  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.122425ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.047055  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.047092  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.047129  110822 httplog.go:90] GET /healthz: (1.149696ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:19.053401  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.647095ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.053662  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0911 18:31:19.061278  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.061311  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.061345  110822 httplog.go:90] GET /healthz: (1.340354ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.072378  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.629612ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.092950  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.153765ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.093216  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0911 18:31:19.112134  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.394501ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.133675  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.873652ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.134226  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0911 18:31:19.154010  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.154043  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.154090  110822 httplog.go:90] GET /healthz: (1.178285ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:19.154422  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.963604ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:19.161147  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.161177  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.161216  110822 httplog.go:90] GET /healthz: (1.24948ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.172897  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.141835ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.173173  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0911 18:31:19.192270  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.42743ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.213401  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.544015ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.213694  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0911 18:31:19.233469  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.389825ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.247534  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.247564  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.247625  110822 httplog.go:90] GET /healthz: (1.174138ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:19.256228  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.487938ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.256562  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0911 18:31:19.262668  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.262699  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.262743  110822 httplog.go:90] GET /healthz: (2.799521ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.278574  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (7.685022ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.297800  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.837095ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.298309  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0911 18:31:19.313055  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (2.232496ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.339049  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.819486ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.339365  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0911 18:31:19.347905  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.347939  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.347981  110822 httplog.go:90] GET /healthz: (1.670482ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:19.356490  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (4.290022ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.370296  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.370330  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.370375  110822 httplog.go:90] GET /healthz: (1.453843ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.372911  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.081785ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.373433  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0911 18:31:19.396674  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (5.714464ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.414271  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.416492ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.414715  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0911 18:31:19.433077  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.245044ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.447347  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.447380  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.447422  110822 httplog.go:90] GET /healthz: (1.283302ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:19.454604  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.684682ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.454866  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0911 18:31:19.460839  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.460864  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.460900  110822 httplog.go:90] GET /healthz: (990.249µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.472141  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.407035ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.494225  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.386323ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.494537  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0911 18:31:19.513994  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (3.161102ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.533039  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.309238ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.533291  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0911 18:31:19.546932  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.546962  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.547004  110822 httplog.go:90] GET /healthz: (1.023747ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:19.552265  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.574446ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.560972  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.561000  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.561037  110822 httplog.go:90] GET /healthz: (1.075019ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.575107  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.325518ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.575867  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0911 18:31:19.593226  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.760216ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.613282  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.44544ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.613627  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0911 18:31:19.632789  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.231237ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.648245  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.648292  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.648330  110822 httplog.go:90] GET /healthz: (2.266604ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:19.653183  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.493663ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.653405  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0911 18:31:19.665146  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.665177  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.665212  110822 httplog.go:90] GET /healthz: (1.130646ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.681687  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.449003ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.693308  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.47807ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.693569  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0911 18:31:19.712156  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.369311ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.733965  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.619609ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.734226  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0911 18:31:19.746758  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.746785  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.746821  110822 httplog.go:90] GET /healthz: (872.833µs) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:19.751748  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.064712ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.761086  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.761128  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.761165  110822 httplog.go:90] GET /healthz: (1.155203ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.773761  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.932907ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.774215  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0911 18:31:19.792145  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.344142ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.813217  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.358054ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.813573  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0911 18:31:19.832368  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.578676ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.848108  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.848153  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.848200  110822 httplog.go:90] GET /healthz: (2.050545ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:19.852904  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.098912ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.853251  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0911 18:31:19.860975  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.861012  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.861070  110822 httplog.go:90] GET /healthz: (1.048493ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.871998  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.193302ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.892718  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.906441ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.893210  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0911 18:31:19.912213  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.390261ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.933438  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.555119ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:19.933839  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0911 18:31:19.953318  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.953355  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.953391  110822 httplog.go:90] GET /healthz: (7.412087ms) 0 [Go-http-client/1.1 127.0.0.1:41662]
I0911 18:31:19.955056  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (2.293898ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:19.961568  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:19.961608  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:19.961670  110822 httplog.go:90] GET /healthz: (1.344943ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:19.972873  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.072757ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:19.973143  110822 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0911 18:31:19.992889  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (2.119509ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:19.994666  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.29429ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.018795  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.566961ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.019088  110822 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0911 18:31:20.032388  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.228647ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.034212  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.412067ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.047050  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:20.047080  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:20.047119  110822 httplog.go:90] GET /healthz: (1.203422ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:20.053122  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.231663ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.053821  110822 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0911 18:31:20.061139  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:20.061183  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:20.061228  110822 httplog.go:90] GET /healthz: (1.182304ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.072538  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.654222ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.074954  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.607657ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.093220  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.189473ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.093483  110822 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0911 18:31:20.112126  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.307657ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.117585  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.935583ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.133887  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.082684ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.134205  110822 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0911 18:31:20.147229  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:20.147266  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:20.147321  110822 httplog.go:90] GET /healthz: (1.288036ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:20.152128  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.328282ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.154009  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.366565ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.160874  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:20.160906  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:20.160947  110822 httplog.go:90] GET /healthz: (1.01339ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.173240  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.490383ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.174234  110822 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0911 18:31:20.192187  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.293243ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.231306  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (38.689287ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.238407  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (6.547451ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.241902  110822 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0911 18:31:20.243666  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.49605ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.245634  110822 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.568893ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.247619  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:20.247642  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:20.247674  110822 httplog.go:90] GET /healthz: (1.463954ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:20.254153  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.499985ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.254645  110822 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0911 18:31:20.261451  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:20.261511  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:20.261553  110822 httplog.go:90] GET /healthz: (1.637253ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.275540  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (2.079492ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.278149  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.150972ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.298670  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (7.803335ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.298972  110822 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0911 18:31:20.318353  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.61015ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.321169  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.299479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.334164  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.006193ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.334470  110822 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0911 18:31:20.347196  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:20.347227  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:20.347266  110822 httplog.go:90] GET /healthz: (1.245197ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:20.354305  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (3.075037ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.356549  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.47075ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.360960  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:20.360991  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:20.361046  110822 httplog.go:90] GET /healthz: (1.069739ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.373367  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.510122ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.373631  110822 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0911 18:31:20.392253  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.401637ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.394734  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.947945ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.413465  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.53137ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.413808  110822 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0911 18:31:20.432444  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.351326ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.434148  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.253316ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.447130  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:20.447155  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:20.447204  110822 httplog.go:90] GET /healthz: (1.221232ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I0911 18:31:20.452986  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.220982ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.453315  110822 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0911 18:31:20.461935  110822 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 18:31:20.461963  110822 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 18:31:20.462030  110822 httplog.go:90] GET /healthz: (1.181838ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.472165  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.340413ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.474134  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.346603ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.493004  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.187271ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.493395  110822 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0911 18:31:20.512289  110822 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.430866ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.514654  110822 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.669552ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.534337  110822 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.198902ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.534602  110822 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0911 18:31:20.547408  110822 httplog.go:90] GET /healthz: (1.334949ms) 200 [Go-http-client/1.1 127.0.0.1:41660]
W0911 18:31:20.548252  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:20.548280  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:20.548307  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:20.548318  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:20.548328  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:20.548336  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:20.548352  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:20.548364  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:20.548373  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:20.548424  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:20.548442  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0911 18:31:20.548528  110822 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0911 18:31:20.548547  110822 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0911 18:31:20.549472  110822 reflector.go:120] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.549528  110822 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.549773  110822 reflector.go:120] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.549794  110822 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.549875  110822 reflector.go:120] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.549886  110822 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.550154  110822 reflector.go:120] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.550166  110822 reflector.go:158] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.550377  110822 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.550391  110822 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.550780  110822 reflector.go:120] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.550799  110822 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.550886  110822 reflector.go:120] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.550901  110822 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.551745  110822 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.551759  110822 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.552125  110822 reflector.go:120] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.552138  110822 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.552843  110822 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (874.292µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:20.552850  110822 reflector.go:120] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.552902  110822 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.553147  110822 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (543.807µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0911 18:31:20.553283  110822 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (1.213731ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42024]
I0911 18:31:20.553716  110822 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (472.658µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42028]
I0911 18:31:20.553793  110822 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (750.467µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I0911 18:31:20.554348  110822 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (526.063µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0911 18:31:20.554968  110822 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (524.299µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0911 18:31:20.555199  110822 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (495.318µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42038]
I0911 18:31:20.556199  110822 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (835.41µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42036]
I0911 18:31:20.558965  110822 get.go:250] Starting watch for /api/v1/nodes, rv=31989 labels= fields= timeout=5m40s
I0911 18:31:20.559049  110822 get.go:250] Starting watch for /api/v1/services, rv=31990 labels= fields= timeout=9m2s
I0911 18:31:20.559909  110822 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (541.954µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:31:20.562130  110822 httplog.go:90] GET /healthz: (2.279273ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42036]
I0911 18:31:20.562259  110822 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=31991 labels= fields= timeout=8m12s
I0911 18:31:20.562688  110822 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=31989 labels= fields= timeout=9m45s
I0911 18:31:20.563647  110822 httplog.go:90] GET /api/v1/namespaces/default: (1.101077ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42036]
I0911 18:31:20.566159  110822 reflector.go:120] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.566188  110822 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0911 18:31:20.566296  110822 httplog.go:90] POST /api/v1/namespaces: (2.308638ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42036]
I0911 18:31:20.567098  110822 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=31995 labels= fields= timeout=9m42s
I0911 18:31:20.568384  110822 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (1.730327ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42036]
I0911 18:31:20.569991  110822 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=31989 labels= fields= timeout=7m28s
I0911 18:31:20.570471  110822 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=31994 labels= fields= timeout=5m54s
I0911 18:31:20.572793  110822 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=31990 labels= fields= timeout=9m16s
I0911 18:31:20.573132  110822 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (6.133808ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0911 18:31:20.573239  110822 get.go:250] Starting watch for /api/v1/pods, rv=31990 labels= fields= timeout=8m33s
I0911 18:31:20.573402  110822 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=31995 labels= fields= timeout=6m22s
I0911 18:31:20.578211  110822 httplog.go:90] POST /api/v1/namespaces/default/services: (4.409654ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0911 18:31:20.579456  110822 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (874.842µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0911 18:31:20.583564  110822 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (3.722162ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0911 18:31:20.585275  110822 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=31995 labels= fields= timeout=7m2s
I0911 18:31:20.649475  110822 shared_informer.go:227] caches populated
I0911 18:31:20.749700  110822 shared_informer.go:227] caches populated
I0911 18:31:20.850156  110822 shared_informer.go:227] caches populated
I0911 18:31:20.950573  110822 shared_informer.go:227] caches populated
I0911 18:31:21.050826  110822 shared_informer.go:227] caches populated
I0911 18:31:21.151047  110822 shared_informer.go:227] caches populated
I0911 18:31:21.251308  110822 shared_informer.go:227] caches populated
I0911 18:31:21.351559  110822 shared_informer.go:227] caches populated
I0911 18:31:21.451765  110822 shared_informer.go:227] caches populated
I0911 18:31:21.551913  110822 shared_informer.go:227] caches populated
I0911 18:31:21.652162  110822 shared_informer.go:227] caches populated
I0911 18:31:21.752421  110822 shared_informer.go:227] caches populated
I0911 18:31:21.752700  110822 plugins.go:630] Loaded volume plugin "kubernetes.io/mock-provisioner"
W0911 18:31:21.752736  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:21.752775  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:21.752797  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:21.752816  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 18:31:21.752826  110822 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0911 18:31:21.752873  110822 pv_controller_base.go:282] Starting persistent volume controller
I0911 18:31:21.752923  110822 shared_informer.go:197] Waiting for caches to sync for persistent volume
I0911 18:31:21.753186  110822 reflector.go:120] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:21.753205  110822 reflector.go:158] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0911 18:31:21.753186  110822 reflector.go:120] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:21.753292  110822 reflector.go:120] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:21.753304  110822 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0911 18:31:21.753314  110822 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0911 18:31:21.753368  110822 reflector.go:120] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:21.753381  110822 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0911 18:31:21.753404  110822 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0911 18:31:21.753420  110822 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0911 18:31:21.754854  110822 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (950.762µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42200]
I0911 18:31:21.754919  110822 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (462.713µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0911 18:31:21.754866  110822 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (556.194µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42202]
I0911 18:31:21.755044  110822 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (541.851µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0911 18:31:21.755382  110822 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (918.778µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42206]
I0911 18:31:21.755877  110822 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=31989 labels= fields= timeout=5m15s
I0911 18:31:21.755877  110822 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=31995 labels= fields= timeout=8m33s
I0911 18:31:21.756060  110822 get.go:250] Starting watch for /api/v1/pods, rv=31990 labels= fields= timeout=7m22s
I0911 18:31:21.756236  110822 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=31989 labels= fields= timeout=7m2s
I0911 18:31:21.756371  110822 get.go:250] Starting watch for /api/v1/nodes, rv=31989 labels= fields= timeout=6m44s
I0911 18:31:21.853113  110822 shared_informer.go:227] caches populated
I0911 18:31:21.853203  110822 shared_informer.go:227] caches populated
I0911 18:31:21.853213  110822 shared_informer.go:204] Caches are synced for persistent volume 
I0911 18:31:21.853234  110822 pv_controller_base.go:158] controller initialized
I0911 18:31:21.853361  110822 pv_controller_base.go:419] resyncing PV controller
I0911 18:31:21.953365  110822 shared_informer.go:227] caches populated
I0911 18:31:22.053611  110822 shared_informer.go:227] caches populated
I0911 18:31:22.153847  110822 shared_informer.go:227] caches populated
I0911 18:31:22.254101  110822 shared_informer.go:227] caches populated
I0911 18:31:22.260058  110822 node_tree.go:93] Added node "node-1" in group "" to NodeTree
I0911 18:31:22.260929  110822 httplog.go:90] POST /api/v1/nodes: (5.912907ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.264111  110822 httplog.go:90] POST /api/v1/nodes: (1.99071ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.264697  110822 node_tree.go:93] Added node "node-2" in group "" to NodeTree
I0911 18:31:22.267852  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.296039ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.270596  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.649066ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.271030  110822 volume_binding_test.go:195] Running test immediate can bind
I0911 18:31:22.272970  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.666984ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.275882  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.491083ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.281678  110822 httplog.go:90] POST /api/v1/persistentvolumes: (5.271018ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.281899  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-i-canbind", version 32412
I0911 18:31:22.281970  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-canbind]: phase: Pending, bound to: "", boundByController: false
I0911 18:31:22.281991  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I0911 18:31:22.282108  110822 pv_controller.go:777] updating PersistentVolume[pv-i-canbind]: set phase Available
I0911 18:31:22.285039  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (2.359529ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:22.285217  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind" with version 32413
I0911 18:31:22.285235  110822 pv_controller.go:798] volume "pv-i-canbind" entered phase "Available"
I0911 18:31:22.285254  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind" with version 32413
I0911 18:31:22.285265  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "", boundByController: false
I0911 18:31:22.285280  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I0911 18:31:22.285285  110822 pv_controller.go:777] updating PersistentVolume[pv-i-canbind]: set phase Available
I0911 18:31:22.285290  110822 pv_controller.go:780] updating PersistentVolume[pv-i-canbind]: phase Available already set
I0911 18:31:22.285866  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (3.420127ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.286696  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind", version 32414
I0911 18:31:22.287115  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:22.287349  110822 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Available, bound to: "", boundByController: false
I0911 18:31:22.287511  110822 pv_controller.go:931] binding volume "pv-i-canbind" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind"
I0911 18:31:22.287601  110822 pv_controller.go:829] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind"
I0911 18:31:22.287696  110822 pv_controller.go:849] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind" bound to volume "pv-i-canbind"
I0911 18:31:22.290951  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind: (2.07979ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:22.290993  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind" with version 32416
I0911 18:31:22.291028  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind (uid: baa4c0e4-fde4-4334-a76b-354f524fa1dd)", boundByController: true
I0911 18:31:22.291048  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind
I0911 18:31:22.291064  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:22.291079  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:31:22.291205  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind" with version 32416
I0911 18:31:22.291246  110822 pv_controller.go:862] updating PersistentVolume[pv-i-canbind]: bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind"
I0911 18:31:22.291261  110822 pv_controller.go:777] updating PersistentVolume[pv-i-canbind]: set phase Bound
I0911 18:31:22.294063  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (2.382223ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:22.294471  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind" with version 32417
I0911 18:31:22.294536  110822 pv_controller.go:798] volume "pv-i-canbind" entered phase "Bound"
I0911 18:31:22.294813  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind]: binding to "pv-i-canbind"
I0911 18:31:22.294869  110822 pv_controller.go:901] volume "pv-i-canbind" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind"
I0911 18:31:22.295078  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind" with version 32417
I0911 18:31:22.295133  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind (uid: baa4c0e4-fde4-4334-a76b-354f524fa1dd)", boundByController: true
I0911 18:31:22.295148  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind
I0911 18:31:22.295176  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:22.295192  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:31:22.299743  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-canbind: (4.278548ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:22.300158  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind" with version 32420
I0911 18:31:22.300226  110822 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind]: bound to "pv-i-canbind"
I0911 18:31:22.300241  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind] status: set phase Bound
I0911 18:31:22.303374  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (16.49424ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.303452  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-canbind
I0911 18:31:22.303482  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-canbind
I0911 18:31:22.303804  110822 scheduler_binder.go:646] PersistentVolume "pv-i-canbind", Node "node-2" mismatch for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-canbind": No matching NodeSelectorTerms
I0911 18:31:22.303821  110822 scheduler_binder.go:652] All bound volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-canbind" match with Node "node-1"
I0911 18:31:22.303944  110822 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-canbind", node "node-1"
I0911 18:31:22.303986  110822 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-canbind", node "node-1": all PVCs bound and nothing to do
I0911 18:31:22.304051  110822 factory.go:606] Attempting to bind pod-i-canbind to node-1
I0911 18:31:22.304203  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-canbind/status: (3.581304ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:22.304539  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind" with version 32421
I0911 18:31:22.304567  110822 pv_controller.go:742] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind" entered phase "Bound"
I0911 18:31:22.304584  110822 pv_controller.go:957] volume "pv-i-canbind" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind"
I0911 18:31:22.304608  110822 pv_controller.go:958] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind (uid: baa4c0e4-fde4-4334-a76b-354f524fa1dd)", boundByController: true
I0911 18:31:22.304635  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I0911 18:31:22.304667  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind" with version 32421
I0911 18:31:22.304687  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind]: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I0911 18:31:22.304712  110822 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind (uid: baa4c0e4-fde4-4334-a76b-354f524fa1dd)", boundByController: true
I0911 18:31:22.304722  110822 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind]: claim is already correctly bound
I0911 18:31:22.304732  110822 pv_controller.go:931] binding volume "pv-i-canbind" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind"
I0911 18:31:22.304741  110822 pv_controller.go:829] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind"
I0911 18:31:22.304758  110822 pv_controller.go:841] updating PersistentVolume[pv-i-canbind]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind"
I0911 18:31:22.304768  110822 pv_controller.go:777] updating PersistentVolume[pv-i-canbind]: set phase Bound
I0911 18:31:22.304776  110822 pv_controller.go:780] updating PersistentVolume[pv-i-canbind]: phase Bound already set
I0911 18:31:22.304785  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind]: binding to "pv-i-canbind"
I0911 18:31:22.304823  110822 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind]: already bound to "pv-i-canbind"
I0911 18:31:22.304835  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind] status: set phase Bound
I0911 18:31:22.304853  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind] status: phase Bound already set
I0911 18:31:22.304865  110822 pv_controller.go:957] volume "pv-i-canbind" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind"
I0911 18:31:22.304890  110822 pv_controller.go:958] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind (uid: baa4c0e4-fde4-4334-a76b-354f524fa1dd)", boundByController: true
I0911 18:31:22.304904  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I0911 18:31:22.307150  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-canbind/binding: (2.774802ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.307802  110822 scheduler.go:667] pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-canbind is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0911 18:31:22.311768  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (3.442463ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.405893  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-canbind: (1.693407ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.408135  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-canbind: (1.420775ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.409750  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-canbind: (1.304371ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.418731  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (8.414129ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.424716  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (5.24497ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.425661  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind" deleted
I0911 18:31:22.425702  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind" with version 32417
I0911 18:31:22.425731  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind (uid: baa4c0e4-fde4-4334-a76b-354f524fa1dd)", boundByController: true
I0911 18:31:22.425741  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind
I0911 18:31:22.431139  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-canbind: (4.967275ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:22.431390  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind not found
I0911 18:31:22.431412  110822 pv_controller.go:575] volume "pv-i-canbind" is released and reclaim policy "Retain" will be executed
I0911 18:31:22.431427  110822 pv_controller.go:777] updating PersistentVolume[pv-i-canbind]: set phase Released
I0911 18:31:22.436331  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (4.547828ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:22.436738  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind" with version 32439
I0911 18:31:22.436762  110822 pv_controller.go:798] volume "pv-i-canbind" entered phase "Released"
I0911 18:31:22.436773  110822 pv_controller.go:1011] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I0911 18:31:22.436801  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind" with version 32439
I0911 18:31:22.436827  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-canbind]: phase: Released, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind (uid: baa4c0e4-fde4-4334-a76b-354f524fa1dd)", boundByController: true
I0911 18:31:22.436894  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind
I0911 18:31:22.436915  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind not found
I0911 18:31:22.436922  110822 pv_controller.go:1011] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I0911 18:31:22.438745  110822 store.go:228] deletion of /da48a647-be65-4f29-98ad-e6c70c881bf1/persistentvolumes/pv-i-canbind failed because of a conflict, going to retry
I0911 18:31:22.449913  110822 httplog.go:90] DELETE /api/v1/persistentvolumes: (24.830028ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.450413  110822 pv_controller_base.go:212] volume "pv-i-canbind" deleted
I0911 18:31:22.450467  110822 pv_controller_base.go:396] deletion of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind" was already processed
I0911 18:31:22.474812  110822 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (23.284613ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.475029  110822 volume_binding_test.go:195] Running test immediate pv prebound
I0911 18:31:22.479838  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.160626ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.482174  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.596606ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.484669  110822 httplog.go:90] POST /api/v1/persistentvolumes: (1.607356ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.486595  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (1.536267ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.486732  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-i-prebound", version 32452
I0911 18:31:22.486828  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound (uid: )", boundByController: false
I0911 18:31:22.486844  110822 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound
I0911 18:31:22.486853  110822 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0911 18:31:22.487367  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound", version 32453
I0911 18:31:22.487397  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:22.487429  110822 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Pending, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound (uid: )", boundByController: false
I0911 18:31:22.487446  110822 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound"
I0911 18:31:22.487459  110822 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound"
I0911 18:31:22.487477  110822 pv_controller.go:849] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0911 18:31:22.489271  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (2.109806ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
I0911 18:31:22.489375  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound
I0911 18:31:22.489539  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound
E0911 18:31:22.490012  110822 factory.go:557] Error scheduling volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0911 18:31:22.490044  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
I0911 18:31:22.491188  110822 store.go:362] GuaranteedUpdate of /da48a647-be65-4f29-98ad-e6c70c881bf1/persistentvolumes/pv-i-prebound failed because of a conflict, going to retry
I0911 18:31:22.491394  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (3.398753ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42342]
I0911 18:31:22.491570  110822 pv_controller.go:852] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:22.491589  110822 pv_controller.go:934] error binding volume "pv-i-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:22.491615  110822 pv_controller_base.go:246] could not sync claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:22.491796  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (4.600807ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:22.492422  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 32455
I0911 18:31:22.492463  110822 pv_controller.go:798] volume "pv-i-prebound" entered phase "Available"
I0911 18:31:22.493470  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (2.197282ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42346]
I0911 18:31:22.493681  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (2.441562ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42348]
I0911 18:31:22.494207  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 32455
I0911 18:31:22.494247  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound (uid: )", boundByController: false
I0911 18:31:22.494262  110822 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound
I0911 18:31:22.494273  110822 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0911 18:31:22.494289  110822 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Available already set
I0911 18:31:22.494374  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound/status: (4.033659ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42272]
E0911 18:31:22.494663  110822 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0911 18:31:22.592539  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (2.243222ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:22.691872  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.524396ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:22.796144  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (2.198202ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:22.893790  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.744805ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:22.991678  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.412515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:23.091951  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.631981ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:23.192644  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.964544ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:23.292205  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.694216ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:23.392180  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.836198ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:23.492721  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.895366ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:23.549363  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound
I0911 18:31:23.549403  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound
E0911 18:31:23.549659  110822 factory.go:557] Error scheduling volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0911 18:31:23.549705  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
E0911 18:31:23.549721  110822 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0911 18:31:23.554554  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (3.809729ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:23.555060  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (4.899771ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42342]
I0911 18:31:23.592782  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (2.058743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42342]
[... 69 similar GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound poll requests elided: one every ~100ms from 18:31:23.693 to 18:31:30.492, all 200 ...]
I0911 18:31:30.564385  110822 httplog.go:90] GET /api/v1/namespaces/default: (1.646348ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42342]
I0911 18:31:30.567293  110822 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.374946ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42342]
I0911 18:31:30.569037  110822 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.23461ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42342]
I0911 18:31:30.592321  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.949235ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42342]
[... 62 similar GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound poll requests elided: one every ~100ms from 18:31:30.692 to 18:31:36.791, all 200 ...]
I0911 18:31:36.853598  110822 pv_controller_base.go:419] resyncing PV controller
I0911 18:31:36.853713  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 32455
I0911 18:31:36.853753  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound (uid: )", boundByController: false
I0911 18:31:36.853760  110822 pv_controller.go:506] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound
I0911 18:31:36.853766  110822 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Available
I0911 18:31:36.853774  110822 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Available already set
I0911 18:31:36.853793  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound" with version 32453
I0911 18:31:36.853804  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:36.853832  110822 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound (uid: )", boundByController: false
I0911 18:31:36.853845  110822 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound"
I0911 18:31:36.853856  110822 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound"
I0911 18:31:36.853887  110822 pv_controller.go:849] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I0911 18:31:36.856700  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (2.369748ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42342]
I0911 18:31:36.856936  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound
I0911 18:31:36.856948  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound
I0911 18:31:36.856998  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 33817
I0911 18:31:36.857022  110822 pv_controller.go:862] updating PersistentVolume[pv-i-prebound]: bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound"
I0911 18:31:36.857034  110822 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Bound
E0911 18:31:36.857227  110822 factory.go:557] Error scheduling volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0911 18:31:36.857256  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
E0911 18:31:36.857267  110822 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0911 18:31:36.857415  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 33817
I0911 18:31:36.857453  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound (uid: bd8f7c2e-e406-4758-badd-5583ff7a33eb)", boundByController: false
I0911 18:31:36.857474  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound
I0911 18:31:36.857531  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:36.857551  110822 pv_controller.go:606] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0911 18:31:36.861077  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (3.839027ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42342]
I0911 18:31:36.861442  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 33818
I0911 18:31:36.861467  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound (uid: bd8f7c2e-e406-4758-badd-5583ff7a33eb)", boundByController: false
I0911 18:31:36.861477  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound
I0911 18:31:36.861517  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:36.861534  110822 pv_controller.go:606] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0911 18:31:36.861648  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 33818
I0911 18:31:36.861668  110822 pv_controller.go:798] volume "pv-i-prebound" entered phase "Bound"
I0911 18:31:36.861679  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0911 18:31:36.861692  110822 pv_controller.go:901] volume "pv-i-prebound" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound"
I0911 18:31:36.861961  110822 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events/pod-i-pv-prebound.15c375dc2044a5e5: (3.531796ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44948]
I0911 18:31:36.862230  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (4.407756ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:36.863692  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-pv-prebound: (1.79068ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42342]
I0911 18:31:36.863888  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound" with version 33820
I0911 18:31:36.863912  110822 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I0911 18:31:36.863921  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound] status: set phase Bound
I0911 18:31:36.865627  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-pv-prebound/status: (1.544996ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:36.865830  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound" with version 33821
I0911 18:31:36.865855  110822 pv_controller.go:742] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound" entered phase "Bound"
I0911 18:31:36.865893  110822 pv_controller.go:957] volume "pv-i-prebound" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound"
I0911 18:31:36.865925  110822 pv_controller.go:958] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound (uid: bd8f7c2e-e406-4758-badd-5583ff7a33eb)", boundByController: false
I0911 18:31:36.865943  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0911 18:31:36.865975  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound" with version 33821
I0911 18:31:36.865990  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0911 18:31:36.866018  110822 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound (uid: bd8f7c2e-e406-4758-badd-5583ff7a33eb)", boundByController: false
I0911 18:31:36.866028  110822 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound]: claim is already correctly bound
I0911 18:31:36.866089  110822 pv_controller.go:931] binding volume "pv-i-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound"
I0911 18:31:36.866100  110822 pv_controller.go:829] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound"
I0911 18:31:36.866117  110822 pv_controller.go:841] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound"
I0911 18:31:36.866127  110822 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Bound
I0911 18:31:36.866136  110822 pv_controller.go:780] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I0911 18:31:36.866145  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I0911 18:31:36.866163  110822 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I0911 18:31:36.866172  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound] status: set phase Bound
I0911 18:31:36.866188  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound] status: phase Bound already set
I0911 18:31:36.866199  110822 pv_controller.go:957] volume "pv-i-prebound" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound"
I0911 18:31:36.866213  110822 pv_controller.go:958] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound (uid: bd8f7c2e-e406-4758-badd-5583ff7a33eb)", boundByController: false
I0911 18:31:36.866224  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I0911 18:31:36.892051  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.653514ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:36.992018  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.664954ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:37.092342  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.965166ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:37.192247  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.787389ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:37.292184  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.852189ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:37.391998  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.650588ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:37.492057  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.688259ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:37.592221  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.754953ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:37.692024  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.64517ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:37.792130  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.864973ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:37.892027  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.667229ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:37.992120  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.769511ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.092027  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.700476ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.191833  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.523534ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.291810  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.492839ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.392375  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (2.055149ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.491981  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.698205ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.552037  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound
I0911 18:31:38.552080  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound
I0911 18:31:38.552255  110822 scheduler_binder.go:652] All bound volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound" match with Node "node-1"
I0911 18:31:38.552312  110822 scheduler_binder.go:646] PersistentVolume "pv-i-prebound", Node "node-2" mismatch for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound": No matching NodeSelectorTerms
I0911 18:31:38.552358  110822 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound", node "node-1"
I0911 18:31:38.552368  110822 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound", node "node-1": all PVCs bound and nothing to do
I0911 18:31:38.552413  110822 factory.go:606] Attempting to bind pod-i-pv-prebound to node-1
I0911 18:31:38.554918  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound/binding: (2.131797ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.555124  110822 scheduler.go:667] pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pv-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0911 18:31:38.557124  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (1.500473ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.591882  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pv-prebound: (1.578295ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.593643  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-pv-prebound: (1.265712ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.594994  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-prebound: (1.032282ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.600485  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (5.141339ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.604320  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (3.270019ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.604727  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound" deleted
I0911 18:31:38.604757  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 33818
I0911 18:31:38.604781  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound (uid: bd8f7c2e-e406-4758-badd-5583ff7a33eb)", boundByController: false
I0911 18:31:38.604790  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound
I0911 18:31:38.604806  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound not found
I0911 18:31:38.604815  110822 pv_controller.go:575] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I0911 18:31:38.604822  110822 pv_controller.go:777] updating PersistentVolume[pv-i-prebound]: set phase Released
I0911 18:31:38.606911  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (1.494082ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44948]
I0911 18:31:38.607314  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 33830
I0911 18:31:38.607333  110822 pv_controller.go:798] volume "pv-i-prebound" entered phase "Released"
I0911 18:31:38.607342  110822 pv_controller.go:1011] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I0911 18:31:38.607360  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-prebound" with version 33830
I0911 18:31:38.607376  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-prebound]: phase: Released, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound (uid: bd8f7c2e-e406-4758-badd-5583ff7a33eb)", boundByController: false
I0911 18:31:38.607384  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound
I0911 18:31:38.607399  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound not found
I0911 18:31:38.607403  110822 pv_controller.go:1011] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I0911 18:31:38.608641  110822 httplog.go:90] DELETE /api/v1/persistentvolumes: (3.791601ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.608751  110822 pv_controller_base.go:212] volume "pv-i-prebound" deleted
I0911 18:31:38.608778  110822 pv_controller_base.go:396] deletion of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-pv-prebound" was already processed
I0911 18:31:38.614127  110822 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (5.129091ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.614282  110822 volume_binding_test.go:195] Running test wait pv prebound
I0911 18:31:38.615872  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.322447ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.617656  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.073424ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.619439  110822 httplog.go:90] POST /api/v1/persistentvolumes: (1.359567ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.619708  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-prebound", version 33836
I0911 18:31:38.619807  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound (uid: )", boundByController: false
I0911 18:31:38.619869  110822 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound
I0911 18:31:38.619916  110822 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Available
I0911 18:31:38.622247  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (1.894107ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.622723  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound", version 33838
I0911 18:31:38.622748  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:38.622771  110822 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Pending, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound (uid: )", boundByController: false
I0911 18:31:38.622782  110822 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound"
I0911 18:31:38.622791  110822 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound"
I0911 18:31:38.622805  110822 pv_controller.go:849] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0911 18:31:38.623116  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.79363ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44948]
I0911 18:31:38.623267  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 33837
I0911 18:31:38.623281  110822 pv_controller.go:798] volume "pv-w-prebound" entered phase "Available"
I0911 18:31:38.624033  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 33837
I0911 18:31:38.624072  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound (uid: )", boundByController: false
I0911 18:31:38.624080  110822 pv_controller.go:506] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound
I0911 18:31:38.624088  110822 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Available
I0911 18:31:38.624097  110822 pv_controller.go:780] updating PersistentVolume[pv-w-prebound]: phase Available already set
I0911 18:31:38.624647  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (1.742678ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.624749  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pv-prebound
I0911 18:31:38.624862  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pv-prebound
I0911 18:31:38.625086  110822 scheduler_binder.go:692] Found matching volumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pv-prebound" on node "node-1"
I0911 18:31:38.625171  110822 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pv-prebound", PVC "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" on node "node-2"
I0911 18:31:38.625196  110822 scheduler_binder.go:718] storage class "wait-wjwc" of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" does not support dynamic provisioning
I0911 18:31:38.625239  110822 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pv-prebound", node "node-1"
I0911 18:31:38.625268  110822 scheduler_assume_cache.go:320] Assumed v1.PersistentVolume "pv-w-prebound", version 33837
I0911 18:31:38.625321  110822 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pv-prebound", node "node-1"
I0911 18:31:38.625337  110822 scheduler_binder.go:400] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I0911 18:31:38.625798  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (2.605013ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:38.626059  110822 pv_controller.go:852] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:38.626092  110822 pv_controller.go:934] error binding volume "pv-w-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:38.626108  110822 pv_controller_base.go:246] could not sync claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:38.627070  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (1.457867ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42282]
I0911 18:31:38.627194  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 33840
I0911 18:31:38.627218  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound (uid: 2443135b-3031-45ca-a444-2cb7620029b7)", boundByController: false
I0911 18:31:38.627227  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound
I0911 18:31:38.627241  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:38.627252  110822 pv_controller.go:606] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0911 18:31:38.627271  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" with version 33838
I0911 18:31:38.627281  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:38.627290  110822 scheduler_binder.go:406] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound"
I0911 18:31:38.627297  110822 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound (uid: 2443135b-3031-45ca-a444-2cb7620029b7)", boundByController: false
I0911 18:31:38.627310  110822 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound"
I0911 18:31:38.627317  110822 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound"
I0911 18:31:38.627326  110822 pv_controller.go:841] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound"
I0911 18:31:38.627332  110822 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0911 18:31:38.628752  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (1.178951ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:38.628945  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 33841
I0911 18:31:38.628970  110822 pv_controller.go:798] volume "pv-w-prebound" entered phase "Bound"
I0911 18:31:38.628983  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0911 18:31:38.629001  110822 pv_controller.go:901] volume "pv-w-prebound" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound"
I0911 18:31:38.629048  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 33841
I0911 18:31:38.629089  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound (uid: 2443135b-3031-45ca-a444-2cb7620029b7)", boundByController: false
I0911 18:31:38.629102  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound
I0911 18:31:38.629118  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:38.629132  110822 pv_controller.go:606] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I0911 18:31:38.630516  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-pv-prebound: (1.33272ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:38.630696  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" with version 33842
I0911 18:31:38.630722  110822 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I0911 18:31:38.630729  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound] status: set phase Bound
I0911 18:31:38.632445  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-pv-prebound/status: (1.44213ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:38.632770  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" with version 33843
I0911 18:31:38.632800  110822 pv_controller.go:742] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" entered phase "Bound"
I0911 18:31:38.632818  110822 pv_controller.go:957] volume "pv-w-prebound" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound"
I0911 18:31:38.632839  110822 pv_controller.go:958] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound (uid: 2443135b-3031-45ca-a444-2cb7620029b7)", boundByController: false
I0911 18:31:38.632855  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0911 18:31:38.632883  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" with version 33843
I0911 18:31:38.632897  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0911 18:31:38.632914  110822 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound (uid: 2443135b-3031-45ca-a444-2cb7620029b7)", boundByController: false
I0911 18:31:38.632924  110822 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound]: claim is already correctly bound
I0911 18:31:38.632931  110822 pv_controller.go:931] binding volume "pv-w-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound"
I0911 18:31:38.632939  110822 pv_controller.go:829] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound"
I0911 18:31:38.632956  110822 pv_controller.go:841] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound"
I0911 18:31:38.632965  110822 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Bound
I0911 18:31:38.632974  110822 pv_controller.go:780] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I0911 18:31:38.632983  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I0911 18:31:38.632999  110822 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I0911 18:31:38.633008  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound] status: set phase Bound
I0911 18:31:38.633027  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound] status: phase Bound already set
I0911 18:31:38.633035  110822 pv_controller.go:957] volume "pv-w-prebound" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound"
I0911 18:31:38.633047  110822 pv_controller.go:958] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound (uid: 2443135b-3031-45ca-a444-2cb7620029b7)", boundByController: false
I0911 18:31:38.633057  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I0911 18:31:38.727105  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pv-prebound: (1.780248ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:38.827184  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pv-prebound: (1.825807ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:38.927317  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pv-prebound: (1.95346ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.027286  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pv-prebound: (1.975153ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.126593  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pv-prebound: (1.315414ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.226620  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pv-prebound: (1.303616ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.326738  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pv-prebound: (1.412069ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.427245  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pv-prebound: (1.438225ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.526937  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pv-prebound: (1.587432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.552223  110822 cache.go:669] Couldn't expire cache for pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pv-prebound. Binding is still in progress.
I0911 18:31:39.626826  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pv-prebound: (1.54803ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.627542  110822 scheduler_binder.go:546] All PVCs for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pv-prebound" are bound
I0911 18:31:39.627592  110822 factory.go:606] Attempting to bind pod-w-pv-prebound to node-1
I0911 18:31:39.630759  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pv-prebound/binding: (2.84164ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.630963  110822 scheduler.go:667] pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pv-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0911 18:31:39.633659  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (2.430673ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.727024  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pv-prebound: (1.655308ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.728965  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-pv-prebound: (1.332287ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.730794  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-prebound: (1.440802ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.738410  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (7.222804ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.743830  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (5.017765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.744332  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" deleted
I0911 18:31:39.744459  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 33841
I0911 18:31:39.744648  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound (uid: 2443135b-3031-45ca-a444-2cb7620029b7)", boundByController: false
I0911 18:31:39.744746  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound
I0911 18:31:39.744806  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound not found
I0911 18:31:39.744845  110822 pv_controller.go:575] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I0911 18:31:39.744884  110822 pv_controller.go:777] updating PersistentVolume[pv-w-prebound]: set phase Released
I0911 18:31:39.747420  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.196478ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44948]
I0911 18:31:39.747819  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 33946
I0911 18:31:39.747841  110822 pv_controller.go:798] volume "pv-w-prebound" entered phase "Released"
I0911 18:31:39.747850  110822 pv_controller.go:1011] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I0911 18:31:39.747890  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-prebound" with version 33946
I0911 18:31:39.747914  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-prebound]: phase: Released, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound (uid: 2443135b-3031-45ca-a444-2cb7620029b7)", boundByController: false
I0911 18:31:39.747926  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound
I0911 18:31:39.747948  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound not found
I0911 18:31:39.747955  110822 pv_controller.go:1011] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I0911 18:31:39.749958  110822 httplog.go:90] DELETE /api/v1/persistentvolumes: (5.510538ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.750327  110822 pv_controller_base.go:212] volume "pv-w-prebound" deleted
I0911 18:31:39.750364  110822 pv_controller_base.go:396] deletion of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-pv-prebound" was already processed
I0911 18:31:39.755398  110822 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (5.029534ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.755723  110822 volume_binding_test.go:195] Running test mix immediate and wait
I0911 18:31:39.757439  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.486466ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.759675  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.280586ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.765342  110822 httplog.go:90] POST /api/v1/persistentvolumes: (1.699753ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.765572  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-canbind-4", version 33953
I0911 18:31:39.765602  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Pending, bound to: "", boundByController: false
I0911 18:31:39.765626  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I0911 18:31:39.765634  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I0911 18:31:39.769910  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (3.916779ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44948]
I0911 18:31:39.769978  110822 httplog.go:90] POST /api/v1/persistentvolumes: (3.947047ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.770330  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-4" with version 33955
I0911 18:31:39.770364  110822 pv_controller.go:798] volume "pv-w-canbind-4" entered phase "Available"
I0911 18:31:39.770884  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-i-canbind-2", version 33954
I0911 18:31:39.770912  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Pending, bound to: "", boundByController: false
I0911 18:31:39.770933  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I0911 18:31:39.770943  110822 pv_controller.go:777] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I0911 18:31:39.774164  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (1.865735ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.774216  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (2.773264ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44948]
I0911 18:31:39.774651  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind-2" with version 33957
I0911 18:31:39.774672  110822 pv_controller.go:798] volume "pv-i-canbind-2" entered phase "Available"
I0911 18:31:39.774801  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4", version 33956
I0911 18:31:39.774825  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:39.774854  110822 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4]: no volume found
I0911 18:31:39.774878  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4] status: set phase Pending
I0911 18:31:39.774893  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4] status: phase Pending already set
I0911 18:31:39.775152  110822 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2", Name:"pvc-w-canbind-4", UID:"924f1d25-20ee-4996-8b3c-9b5ec2fd3b03", APIVersion:"v1", ResourceVersion:"33956", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0911 18:31:39.775871  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-4" with version 33955
I0911 18:31:39.775910  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "", boundByController: false
I0911 18:31:39.775969  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I0911 18:31:39.775980  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I0911 18:31:39.775990  110822 pv_controller.go:780] updating PersistentVolume[pv-w-canbind-4]: phase Available already set
I0911 18:31:39.776049  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind-2" with version 33957
I0911 18:31:39.776071  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "", boundByController: false
I0911 18:31:39.776091  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I0911 18:31:39.776097  110822 pv_controller.go:777] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I0911 18:31:39.776105  110822 pv_controller.go:780] updating PersistentVolume[pv-i-canbind-2]: phase Available already set
I0911 18:31:39.777467  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2", version 33958
I0911 18:31:39.777518  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:39.777530  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (2.302642ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.777551  110822 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Available, bound to: "", boundByController: false
I0911 18:31:39.777564  110822 pv_controller.go:931] binding volume "pv-i-canbind-2" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2"
I0911 18:31:39.777578  110822 pv_controller.go:829] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2"
I0911 18:31:39.777602  110822 pv_controller.go:849] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2" bound to volume "pv-i-canbind-2"
I0911 18:31:39.779933  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2: (1.914219ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.780146  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind-2" with version 33960
I0911 18:31:39.780164  110822 pv_controller.go:862] updating PersistentVolume[pv-i-canbind-2]: bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2"
I0911 18:31:39.780175  110822 pv_controller.go:777] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I0911 18:31:39.780224  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind-2" with version 33960
I0911 18:31:39.780260  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2 (uid: 47986a15-0198-4b83-9d66-a3ce1dd29a69)", boundByController: true
I0911 18:31:39.780275  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2
I0911 18:31:39.780294  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:39.780309  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:31:39.780529  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (5.665265ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44948]
I0911 18:31:39.784058  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound
I0911 18:31:39.784079  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound
E0911 18:31:39.784289  110822 factory.go:557] Error scheduling volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0911 18:31:39.784312  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound to (PodScheduled==False, Reason=Unschedulable)
I0911 18:31:39.784524  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (2.501234ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44948]
I0911 18:31:39.785253  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind-2" with version 33961
I0911 18:31:39.785387  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2 (uid: 47986a15-0198-4b83-9d66-a3ce1dd29a69)", boundByController: true
I0911 18:31:39.785403  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2
I0911 18:31:39.785417  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:39.785430  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:31:39.786578  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (6.249781ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.786799  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind-2" with version 33961
I0911 18:31:39.786824  110822 pv_controller.go:798] volume "pv-i-canbind-2" entered phase "Bound"
I0911 18:31:39.786837  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I0911 18:31:39.786856  110822 pv_controller.go:901] volume "pv-i-canbind-2" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2"
I0911 18:31:39.787507  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (2.322522ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0911 18:31:39.787977  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound/status: (2.895107ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45390]
E0911 18:31:39.788298  110822 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0911 18:31:39.788305  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (3.164545ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44948]
I0911 18:31:39.788380  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound
I0911 18:31:39.788393  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound
E0911 18:31:39.788990  110822 factory.go:557] Error scheduling volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0911 18:31:39.789027  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound to (PodScheduled==False, Reason=Unschedulable)
E0911 18:31:39.789042  110822 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0911 18:31:39.789379  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-canbind-2: (2.270815ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.789592  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2" with version 33965
I0911 18:31:39.789618  110822 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2]: bound to "pv-i-canbind-2"
I0911 18:31:39.789629  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2] status: set phase Bound
I0911 18:31:39.791101  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (1.731559ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0911 18:31:39.791775  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-canbind-2/status: (1.838755ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.791936  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2" with version 33967
I0911 18:31:39.791973  110822 pv_controller.go:742] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2" entered phase "Bound"
I0911 18:31:39.791989  110822 pv_controller.go:957] volume "pv-i-canbind-2" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2"
I0911 18:31:39.791998  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (2.501937ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45392]
I0911 18:31:39.792006  110822 pv_controller.go:958] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2 (uid: 47986a15-0198-4b83-9d66-a3ce1dd29a69)", boundByController: true
I0911 18:31:39.792021  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I0911 18:31:39.792044  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2" with version 33967
I0911 18:31:39.792061  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2]: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I0911 18:31:39.792073  110822 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2 (uid: 47986a15-0198-4b83-9d66-a3ce1dd29a69)", boundByController: true
I0911 18:31:39.792087  110822 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2]: claim is already correctly bound
I0911 18:31:39.792094  110822 pv_controller.go:931] binding volume "pv-i-canbind-2" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2"
I0911 18:31:39.792102  110822 pv_controller.go:829] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2"
I0911 18:31:39.792114  110822 pv_controller.go:841] updating PersistentVolume[pv-i-canbind-2]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2"
I0911 18:31:39.792121  110822 pv_controller.go:777] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I0911 18:31:39.792127  110822 pv_controller.go:780] updating PersistentVolume[pv-i-canbind-2]: phase Bound already set
I0911 18:31:39.792133  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I0911 18:31:39.792145  110822 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2]: already bound to "pv-i-canbind-2"
I0911 18:31:39.792157  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2] status: set phase Bound
I0911 18:31:39.792172  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2] status: phase Bound already set
I0911 18:31:39.792188  110822 pv_controller.go:957] volume "pv-i-canbind-2" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2"
I0911 18:31:39.792207  110822 pv_controller.go:958] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2 (uid: 47986a15-0198-4b83-9d66-a3ce1dd29a69)", boundByController: true
I0911 18:31:39.792230  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
E0911 18:31:39.792280  110822 factory.go:581] pod is already present in the backoffQ
I0911 18:31:39.888454  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (2.96204ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:39.988334  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (2.760944ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.087323  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.87419ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.186948  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.3989ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.287180  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.608862ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.386950  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.456432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.487096  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.45418ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.564605  110822 httplog.go:90] GET /api/v1/namespaces/default: (1.733657ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.566303  110822 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.351809ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.568125  110822 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.346311ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.589802  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (2.594792ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.688152  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (2.587683ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.787780  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (2.282269ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.886987  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.511917ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:40.987141  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.547386ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.087082  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.604149ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.186914  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.502799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.286969  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.508878ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.386893  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.355898ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.487111  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.509779ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.554476  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound
I0911 18:31:41.554588  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound
I0911 18:31:41.554839  110822 scheduler_binder.go:652] All bound volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound" match with Node "node-1"
I0911 18:31:41.554891  110822 scheduler_binder.go:692] Found matching volumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound" on node "node-1"
I0911 18:31:41.554991  110822 scheduler_binder.go:646] PersistentVolume "pv-i-canbind-2", Node "node-2" mismatch for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound": No matching NodeSelectorTerms
I0911 18:31:41.555021  110822 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound", PVC "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" on node "node-2"
I0911 18:31:41.555037  110822 scheduler_binder.go:718] storage class "wait-kc6h" of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" does not support dynamic provisioning
I0911 18:31:41.555094  110822 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound", node "node-1"
I0911 18:31:41.555149  110822 scheduler_assume_cache.go:320] Assumed v1.PersistentVolume "pv-w-canbind-4", version 33955
I0911 18:31:41.555203  110822 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound", node "node-1"
I0911 18:31:41.555220  110822 scheduler_binder.go:400] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" bound to volume "pv-w-canbind-4"
I0911 18:31:41.558396  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4: (2.702347ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.558855  110822 scheduler_binder.go:406] updating PersistentVolume[pv-w-canbind-4]: bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4"
I0911 18:31:41.559057  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-4" with version 34227
I0911 18:31:41.559108  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4 (uid: 924f1d25-20ee-4996-8b3c-9b5ec2fd3b03)", boundByController: true
I0911 18:31:41.559117  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4
I0911 18:31:41.559131  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:41.559143  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:31:41.559165  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" with version 33956
I0911 18:31:41.559175  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:41.559204  110822 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4 (uid: 924f1d25-20ee-4996-8b3c-9b5ec2fd3b03)", boundByController: true
I0911 18:31:41.559213  110822 pv_controller.go:931] binding volume "pv-w-canbind-4" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4"
I0911 18:31:41.559225  110822 pv_controller.go:829] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4"
I0911 18:31:41.559239  110822 pv_controller.go:841] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4"
I0911 18:31:41.559251  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I0911 18:31:41.562014  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (2.503765ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.562354  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-4" with version 34229
I0911 18:31:41.562389  110822 pv_controller.go:798] volume "pv-w-canbind-4" entered phase "Bound"
I0911 18:31:41.562408  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I0911 18:31:41.562424  110822 pv_controller.go:901] volume "pv-w-canbind-4" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4"
I0911 18:31:41.563569  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-4" with version 34229
I0911 18:31:41.563623  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4 (uid: 924f1d25-20ee-4996-8b3c-9b5ec2fd3b03)", boundByController: true
I0911 18:31:41.563642  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4
I0911 18:31:41.563661  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:41.563677  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:31:41.565033  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind-4: (2.337507ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.565470  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" with version 34231
I0911 18:31:41.565517  110822 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4]: bound to "pv-w-canbind-4"
I0911 18:31:41.565531  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4] status: set phase Bound
I0911 18:31:41.567883  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind-4/status: (2.032232ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.568130  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" with version 34234
I0911 18:31:41.568160  110822 pv_controller.go:742] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" entered phase "Bound"
I0911 18:31:41.568179  110822 pv_controller.go:957] volume "pv-w-canbind-4" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4"
I0911 18:31:41.568208  110822 pv_controller.go:958] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4 (uid: 924f1d25-20ee-4996-8b3c-9b5ec2fd3b03)", boundByController: true
I0911 18:31:41.568223  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I0911 18:31:41.568261  110822 pv_controller_base.go:526] storeObjectUpdate: ignoring claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" version 34231
I0911 18:31:41.570169  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" with version 34234
I0911 18:31:41.570212  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4]: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I0911 18:31:41.570236  110822 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4 (uid: 924f1d25-20ee-4996-8b3c-9b5ec2fd3b03)", boundByController: true
I0911 18:31:41.570250  110822 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4]: claim is already correctly bound
I0911 18:31:41.570263  110822 pv_controller.go:931] binding volume "pv-w-canbind-4" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4"
I0911 18:31:41.570275  110822 pv_controller.go:829] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4"
I0911 18:31:41.570296  110822 pv_controller.go:841] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4"
I0911 18:31:41.570314  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I0911 18:31:41.570323  110822 pv_controller.go:780] updating PersistentVolume[pv-w-canbind-4]: phase Bound already set
I0911 18:31:41.570333  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I0911 18:31:41.570353  110822 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4]: already bound to "pv-w-canbind-4"
I0911 18:31:41.570372  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4] status: set phase Bound
I0911 18:31:41.570390  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4] status: phase Bound already set
I0911 18:31:41.570403  110822 pv_controller.go:957] volume "pv-w-canbind-4" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4"
I0911 18:31:41.570424  110822 pv_controller.go:958] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4 (uid: 924f1d25-20ee-4996-8b3c-9b5ec2fd3b03)", boundByController: true
I0911 18:31:41.570439  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I0911 18:31:41.587396  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.842863ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.691175  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (5.417623ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.787182  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.633515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.887583  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (2.092827ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:41.987155  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.651781ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.087372  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.93412ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.197720  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (11.5007ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.287088  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.57851ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.389010  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (2.999479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.487305  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (1.799837ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.552777  110822 cache.go:669] Couldn't expire cache for pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound. Binding is still in progress.
I0911 18:31:42.559112  110822 scheduler_binder.go:546] All PVCs for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound" are bound
I0911 18:31:42.559169  110822 factory.go:606] Attempting to bind pod-mix-bound to node-1
I0911 18:31:42.562839  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound/binding: (3.356822ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.563096  110822 scheduler.go:667] pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-mix-bound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0911 18:31:42.565734  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (2.360607ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.588207  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-mix-bound: (2.345472ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.591192  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind-4: (1.334518ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.593982  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-canbind-2: (2.347593ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.603595  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-4: (9.241781ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.606001  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-canbind-2: (1.858253ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.616945  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (10.524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.622464  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2" deleted
I0911 18:31:42.622537  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind-2" with version 33961
I0911 18:31:42.622573  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2 (uid: 47986a15-0198-4b83-9d66-a3ce1dd29a69)", boundByController: true
I0911 18:31:42.622588  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2
I0911 18:31:42.627875  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-canbind-2: (4.976459ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0911 18:31:42.628748  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2 not found
I0911 18:31:42.628948  110822 pv_controller.go:575] volume "pv-i-canbind-2" is released and reclaim policy "Retain" will be executed
I0911 18:31:42.629047  110822 pv_controller.go:777] updating PersistentVolume[pv-i-canbind-2]: set phase Released
I0911 18:31:42.633226  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (3.725679ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0911 18:31:42.633524  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (16.024368ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.634004  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind-2" with version 34333
I0911 18:31:42.634177  110822 pv_controller.go:798] volume "pv-i-canbind-2" entered phase "Released"
I0911 18:31:42.634414  110822 pv_controller.go:1011] reclaimVolume[pv-i-canbind-2]: policy is Retain, nothing to do
I0911 18:31:42.634918  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" deleted
I0911 18:31:42.634953  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-4" with version 34229
I0911 18:31:42.634979  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4 (uid: 924f1d25-20ee-4996-8b3c-9b5ec2fd3b03)", boundByController: true
I0911 18:31:42.634991  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4
I0911 18:31:42.654555  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind-4: (19.400418ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0911 18:31:42.654918  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4 not found
I0911 18:31:42.654948  110822 pv_controller.go:575] volume "pv-w-canbind-4" is released and reclaim policy "Retain" will be executed
I0911 18:31:42.654961  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-4]: set phase Released
I0911 18:31:42.657961  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (2.687837ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0911 18:31:42.658377  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-4" with version 34336
I0911 18:31:42.658406  110822 pv_controller.go:798] volume "pv-w-canbind-4" entered phase "Released"
I0911 18:31:42.658418  110822 pv_controller.go:1011] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I0911 18:31:42.658446  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-canbind-2" with version 34333
I0911 18:31:42.658471  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Released, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2 (uid: 47986a15-0198-4b83-9d66-a3ce1dd29a69)", boundByController: true
I0911 18:31:42.658489  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2
I0911 18:31:42.658542  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2 not found
I0911 18:31:42.658549  110822 pv_controller.go:1011] reclaimVolume[pv-i-canbind-2]: policy is Retain, nothing to do
I0911 18:31:42.658564  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-4" with version 34336
I0911 18:31:42.658581  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Released, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4 (uid: 924f1d25-20ee-4996-8b3c-9b5ec2fd3b03)", boundByController: true
I0911 18:31:42.658597  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4
I0911 18:31:42.658615  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4 not found
I0911 18:31:42.658621  110822 pv_controller.go:1011] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I0911 18:31:42.663213  110822 pv_controller_base.go:212] volume "pv-i-canbind-2" deleted
I0911 18:31:42.663250  110822 pv_controller_base.go:396] deletion of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-canbind-2" was already processed
I0911 18:31:42.668888  110822 httplog.go:90] DELETE /api/v1/persistentvolumes: (34.778275ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.670048  110822 pv_controller_base.go:212] volume "pv-w-canbind-4" deleted
I0911 18:31:42.670092  110822 pv_controller_base.go:396] deletion of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-4" was already processed
I0911 18:31:42.677958  110822 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (7.787359ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.678146  110822 volume_binding_test.go:195] Running test immediate cannot bind
I0911 18:31:42.680682  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.342507ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.688616  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (6.566586ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.692156  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (2.67131ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.692413  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-cannotbind", version 34346
I0911 18:31:42.692449  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:42.692472  110822 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-cannotbind]: no volume found
I0911 18:31:42.692487  110822 pv_controller.go:1326] provisionClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-cannotbind]: started
E0911 18:31:42.692525  110822 pv_controller.go:1331] error finding provisioning plugin for claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-cannotbind: no volume plugin matched
I0911 18:31:42.692709  110822 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2", Name:"pvc-i-cannotbind", UID:"84812a2c-f529-4470-acdd-f4215e133e78", APIVersion:"v1", ResourceVersion:"34346", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched
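The ProvisioningFailed sequence above (pv_controller.go:1326/1331) occurs because the claim requests dynamic provisioning but no registered volume plugin matches its StorageClass provisioner, so the controller emits a Warning event instead of provisioning. A minimal illustrative sketch of that lookup — the plugin table and function name here are hypothetical, not the real pv_controller API:

```python
# Hypothetical sketch of the provisioning-plugin lookup that produces
# "no volume plugin matched". Names and structure are illustrative only.

PLUGINS = {
    "kubernetes.io/gce-pd": "gcePersistentDiskPlugin",
    "kubernetes.io/aws-ebs": "awsElasticBlockStorePlugin",
}

def find_provisioning_plugin(provisioner: str) -> str:
    """Return the plugin registered for a StorageClass provisioner, if any."""
    plugin = PLUGINS.get(provisioner)
    if plugin is None:
        # Mirrors the log: "error finding provisioning plugin for claim ...:
        # no volume plugin matched" -> a ProvisioningFailed event is emitted.
        raise LookupError("no volume plugin matched")
    return plugin
```

In the test this is the intended outcome: `pvc-i-cannotbind` deliberately uses a class with no matching provisioner, so the claim stays Pending and the pod stays unschedulable.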
I0911 18:31:42.695749  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (2.966172ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
I0911 18:31:42.696201  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-cannotbind
I0911 18:31:42.696220  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-cannotbind
E0911 18:31:42.696409  110822 factory.go:557] Error scheduling volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-cannotbind: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0911 18:31:42.696443  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I0911 18:31:42.697219  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (4.436515ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.704882  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (6.407431ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46350]
I0911 18:31:42.707046  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-cannotbind/status: (9.444822ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45394]
E0911 18:31:42.707337  110822 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0911 18:31:42.707736  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-cannotbind: (9.304086ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46348]
I0911 18:31:42.802030  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-cannotbind: (3.745771ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.806202  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-cannotbind: (3.643773ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.811258  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-cannotbind
I0911 18:31:42.811468  110822 scheduler.go:526] Skip schedule deleting pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-cannotbind
I0911 18:31:42.813655  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (1.723899ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46346]
I0911 18:31:42.813872  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (7.084124ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.818887  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-cannotbind" deleted
I0911 18:31:42.819327  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (5.135111ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.822714  110822 httplog.go:90] DELETE /api/v1/persistentvolumes: (2.897649ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.832405  110822 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.122772ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.832606  110822 volume_binding_test.go:195] Running test immediate pvc prebound
I0911 18:31:42.834347  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.482618ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.836664  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.716066ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.838995  110822 httplog.go:90] POST /api/v1/persistentvolumes: (1.664492ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.839762  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-i-pvc-prebound", version 34370
I0911 18:31:42.839796  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I0911 18:31:42.839818  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I0911 18:31:42.839827  110822 pv_controller.go:777] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I0911 18:31:42.841954  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (1.893337ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.842458  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound", version 34371
I0911 18:31:42.842534  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I0911 18:31:42.842553  110822 pv_controller.go:347] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I0911 18:31:42.842573  110822 pv_controller.go:366] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Pending, bound to: "", boundByController: false
I0911 18:31:42.842593  110822 pv_controller.go:370] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: volume is unbound, binding
I0911 18:31:42.842613  110822 pv_controller.go:931] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound"
I0911 18:31:42.842625  110822 pv_controller.go:829] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound"
I0911 18:31:42.842646  110822 pv_controller.go:849] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" bound to volume "pv-i-pvc-prebound"
I0911 18:31:42.842670  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (2.652823ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46346]
I0911 18:31:42.842994  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34372
I0911 18:31:42.843018  110822 pv_controller.go:798] volume "pv-i-pvc-prebound" entered phase "Available"
I0911 18:31:42.843203  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34372
I0911 18:31:42.843226  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I0911 18:31:42.843246  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I0911 18:31:42.843253  110822 pv_controller.go:777] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I0911 18:31:42.843262  110822 pv_controller.go:780] updating PersistentVolume[pv-i-pvc-prebound]: phase Available already set
I0911 18:31:42.846833  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (2.832842ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46346]
I0911 18:31:42.847892  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound
I0911 18:31:42.847917  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound
E0911 18:31:42.848113  110822 factory.go:557] Error scheduling volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0911 18:31:42.848145  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I0911 18:31:42.850104  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (973.704µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46352]
I0911 18:31:42.851222  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (1.981318ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:42.853051  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound/status: (4.015109ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46346]
E0911 18:31:42.853726  110822 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0911 18:31:42.854620  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound: (11.748485ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44968]
I0911 18:31:42.854862  110822 pv_controller.go:852] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:42.854891  110822 pv_controller.go:934] error binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:42.854906  110822 pv_controller_base.go:246] could not sync claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
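The 409 above is ordinary optimistic concurrency, not a test failure: the controller's bind PUT carried a resourceVersion that the earlier status update (which produced version 34372) had already superseded, so the apiserver rejected it and the claim was re-queued for a retry against the latest version. A toy compare-and-swap model of that behavior — this is an illustrative sketch, not the apiserver's real storage code:

```python
# Toy model of apiserver optimistic concurrency: a write must carry the
# object's current resourceVersion or it fails with a Conflict (HTTP 409).

class Conflict(Exception):
    pass

class Store:
    def __init__(self):
        self.obj, self.version = None, 0

    def put(self, obj, resource_version: int) -> int:
        if resource_version != self.version:
            # "the object has been modified; please apply your changes
            # to the latest version and try again"
            raise Conflict("the object has been modified")
        self.obj, self.version = obj, self.version + 1
        return self.version

store = Store()
v = store.put({"phase": "Available"}, 0)          # status update wins, bumps version
try:
    store.put({"claimRef": "pvc"}, 0)             # stale version -> 409
except Conflict:
    v = store.put({"claimRef": "pvc"}, v)         # retry with latest version succeeds
```

This is why the controller logs the failure at Info level and simply reports "could not sync claim": the next sync loop retries with the refreshed object.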
I0911 18:31:42.949374  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.568252ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
[... 76 similar 100ms GET polls of pod-i-pvc-prebound (all 200 OK) elided, 18:31:43.051 through 18:31:50.549 ...]
I0911 18:31:50.564716  110822 httplog.go:90] GET /api/v1/namespaces/default: (1.788351ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:50.566476  110822 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.250218ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:50.568056  110822 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.192211ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:50.649409  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.852428ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:50.749068  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.487069ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:50.849489  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.913129ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:50.949245  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.691655ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:51.049284  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.612825ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:51.149218  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.724794ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:51.249339  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.723764ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:51.349332  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.753678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:51.449073  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.585552ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:51.549401  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.742947ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:51.649429  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.888341ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:51.750756  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (3.072106ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:51.849280  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.710776ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:51.853805  110822 pv_controller_base.go:419] resyncing PV controller
I0911 18:31:51.853926  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34372
I0911 18:31:51.853976  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I0911 18:31:51.853996  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I0911 18:31:51.854005  110822 pv_controller.go:777] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I0911 18:31:51.854003  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" with version 34371
I0911 18:31:51.854020  110822 pv_controller.go:780] updating PersistentVolume[pv-i-pvc-prebound]: phase Available already set
I0911 18:31:51.854049  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I0911 18:31:51.854071  110822 pv_controller.go:347] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I0911 18:31:51.854090  110822 pv_controller.go:366] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I0911 18:31:51.854108  110822 pv_controller.go:370] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: volume is unbound, binding
I0911 18:31:51.854125  110822 pv_controller.go:931] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound"
I0911 18:31:51.854136  110822 pv_controller.go:829] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound"
I0911 18:31:51.854181  110822 pv_controller.go:849] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" bound to volume "pv-i-pvc-prebound"
I0911 18:31:51.857299  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound
I0911 18:31:51.857544  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound
E0911 18:31:51.857892  110822 factory.go:557] Error scheduling volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0911 18:31:51.857609  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound: (2.813855ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:51.857928  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E0911 18:31:51.857964  110822 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0911 18:31:51.858187  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34882
I0911 18:31:51.858224  110822 pv_controller.go:862] updating PersistentVolume[pv-i-pvc-prebound]: bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound"
I0911 18:31:51.858235  110822 pv_controller.go:777] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I0911 18:31:51.860894  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (2.461271ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46352]
I0911 18:31:51.861543  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (3.012773ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:51.862380  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34883
I0911 18:31:51.862410  110822 pv_controller.go:798] volume "pv-i-pvc-prebound" entered phase "Bound"
I0911 18:31:51.862421  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I0911 18:31:51.862448  110822 pv_controller.go:901] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound"
I0911 18:31:51.863525  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.864894ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:51.863737  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34882
I0911 18:31:51.863771  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound (uid: f87e6a45-e5e9-4808-9289-ffbfdad07400)", boundByController: true
I0911 18:31:51.863780  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound
I0911 18:31:51.863890  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I0911 18:31:51.863902  110822 pv_controller.go:619] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I0911 18:31:51.863910  110822 pv_controller.go:777] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I0911 18:31:51.864907  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-prebound: (2.22476ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46352]
I0911 18:31:51.865158  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" with version 34886
I0911 18:31:51.865188  110822 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: bound to "pv-i-pvc-prebound"
I0911 18:31:51.865199  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound] status: set phase Bound
I0911 18:31:51.866463  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (2.262153ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:51.866961  110822 pv_controller.go:790] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:51.866978  110822 pv_controller_base.go:202] could not sync volume "pv-i-pvc-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:51.867005  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34883
I0911 18:31:51.867027  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound (uid: f87e6a45-e5e9-4808-9289-ffbfdad07400)", boundByController: true
I0911 18:31:51.867035  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound
I0911 18:31:51.867048  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I0911 18:31:51.867056  110822 pv_controller.go:619] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I0911 18:31:51.867061  110822 pv_controller.go:777] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I0911 18:31:51.867067  110822 pv_controller.go:780] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I0911 18:31:51.867068  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-prebound/status: (1.588992ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46352]
I0911 18:31:51.867283  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" with version 34887
I0911 18:31:51.867301  110822 pv_controller.go:742] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" entered phase "Bound"
I0911 18:31:51.867313  110822 pv_controller.go:957] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound"
I0911 18:31:51.867327  110822 pv_controller.go:958] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound (uid: f87e6a45-e5e9-4808-9289-ffbfdad07400)", boundByController: true
I0911 18:31:51.867338  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I0911 18:31:51.867367  110822 pv_controller_base.go:526] storeObjectUpdate: ignoring claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" version 34886
I0911 18:31:51.867662  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" with version 34887
I0911 18:31:51.867678  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I0911 18:31:51.867690  110822 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: volume "pv-i-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound (uid: f87e6a45-e5e9-4808-9289-ffbfdad07400)", boundByController: true
I0911 18:31:51.867700  110822 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: claim is already correctly bound
I0911 18:31:51.867709  110822 pv_controller.go:931] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound"
I0911 18:31:51.867716  110822 pv_controller.go:829] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound"
I0911 18:31:51.867730  110822 pv_controller.go:841] updating PersistentVolume[pv-i-pvc-prebound]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound"
I0911 18:31:51.867739  110822 pv_controller.go:777] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I0911 18:31:51.867744  110822 pv_controller.go:780] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I0911 18:31:51.867751  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I0911 18:31:51.867772  110822 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound]: already bound to "pv-i-pvc-prebound"
I0911 18:31:51.867794  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound] status: set phase Bound
I0911 18:31:51.867821  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound] status: phase Bound already set
I0911 18:31:51.867834  110822 pv_controller.go:957] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound"
I0911 18:31:51.867846  110822 pv_controller.go:958] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound (uid: f87e6a45-e5e9-4808-9289-ffbfdad07400)", boundByController: true
I0911 18:31:51.867855  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I0911 18:31:51.951860  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (3.794457ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:52.053306  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (5.727114ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:52.149078  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.610666ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:52.252053  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (4.425725ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:52.349708  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (2.124238ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:52.451365  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (3.802064ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:52.549256  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.643654ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:52.650727  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (3.137359ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:52.750050  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (2.347082ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:52.850843  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (2.40683ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:52.949013  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.556892ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:53.049232  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.767497ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:53.149165  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.70924ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:53.249617  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.667779ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:53.349985  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (2.40413ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:53.455174  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (7.559259ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:53.549075  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.576754ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:53.650980  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (2.482666ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:53.749286  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.733971ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:53.849061  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.497981ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:53.949298  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.672735ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.049302  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.773375ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.149702  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (2.244026ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.250320  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (2.07533ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.350990  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.9467ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.449170  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.591239ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.549222  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.668578ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.556299  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound
I0911 18:31:54.556332  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound
I0911 18:31:54.556547  110822 scheduler_binder.go:652] All bound volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound" match with Node "node-1"
I0911 18:31:54.556640  110822 scheduler_binder.go:646] PersistentVolume "pv-i-pvc-prebound", Node "node-2" mismatch for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound": No matching NodeSelectorTerms
I0911 18:31:54.556705  110822 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound", node "node-1"
I0911 18:31:54.556721  110822 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound", node "node-1": all PVCs bound and nothing to do
I0911 18:31:54.556763  110822 factory.go:606] Attempting to bind pod-i-pvc-prebound to node-1
I0911 18:31:54.559472  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound/binding: (2.154592ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.560486  110822 scheduler.go:667] pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-i-pvc-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0911 18:31:54.563328  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (2.488201ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.649308  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-i-pvc-prebound: (1.749937ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.651458  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-prebound: (1.459479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.652998  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-pvc-prebound: (1.189234ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.663167  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (9.806224ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.673151  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (9.288432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.674127  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" deleted
I0911 18:31:54.674169  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34883
I0911 18:31:54.674195  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound (uid: f87e6a45-e5e9-4808-9289-ffbfdad07400)", boundByController: true
I0911 18:31:54.674203  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound
I0911 18:31:54.676229  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-i-prebound: (1.124751ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.677153  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound not found
I0911 18:31:54.677189  110822 pv_controller.go:575] volume "pv-i-pvc-prebound" is released and reclaim policy "Retain" will be executed
I0911 18:31:54.677203  110822 pv_controller.go:777] updating PersistentVolume[pv-i-pvc-prebound]: set phase Released
I0911 18:31:54.679214  110822 httplog.go:90] DELETE /api/v1/persistentvolumes: (4.824744ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.679335  110822 store.go:362] GuaranteedUpdate of /da48a647-be65-4f29-98ad-e6c70c881bf1/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
I0911 18:31:54.679488  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (1.956345ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.679755  110822 pv_controller.go:790] updating PersistentVolume[pv-i-pvc-prebound]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": StorageError: invalid object, Code: 4, Key: /da48a647-be65-4f29-98ad-e6c70c881bf1/persistentvolumes/pv-i-pvc-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e567b88e-dc1d-4c30-ae8f-39a27b877160, UID in object meta: 
I0911 18:31:54.679780  110822 pv_controller_base.go:202] could not sync volume "pv-i-pvc-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": StorageError: invalid object, Code: 4, Key: /da48a647-be65-4f29-98ad-e6c70c881bf1/persistentvolumes/pv-i-pvc-prebound, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e567b88e-dc1d-4c30-ae8f-39a27b877160, UID in object meta: 
I0911 18:31:54.679976  110822 pv_controller_base.go:212] volume "pv-i-pvc-prebound" deleted
I0911 18:31:54.680021  110822 pv_controller_base.go:396] deletion of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-i-prebound" was already processed
I0911 18:31:54.688461  110822 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.603668ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.688708  110822 volume_binding_test.go:195] Running test wait can bind
I0911 18:31:54.690265  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.309609ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.693270  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.340517ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.695127  110822 httplog.go:90] POST /api/v1/persistentvolumes: (1.497196ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.695566  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-canbind", version 35511
I0911 18:31:54.695595  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I0911 18:31:54.695616  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0911 18:31:54.695622  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Available
I0911 18:31:54.697308  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (1.481439ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.697618  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 35513
I0911 18:31:54.697647  110822 pv_controller.go:798] volume "pv-w-canbind" entered phase "Available"
I0911 18:31:54.697822  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind", version 35512
I0911 18:31:54.697846  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:54.697874  110822 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind]: no volume found
I0911 18:31:54.697894  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind] status: set phase Pending
I0911 18:31:54.697916  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind] status: phase Pending already set
I0911 18:31:54.698135  110822 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2", Name:"pvc-w-canbind", UID:"def05412-abb3-4f6a-9f4c-c96d7078f99a", APIVersion:"v1", ResourceVersion:"35512", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0911 18:31:54.697308  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (1.481071ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.698272  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 35513
I0911 18:31:54.698298  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I0911 18:31:54.698317  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I0911 18:31:54.698325  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Available
I0911 18:31:54.698333  110822 pv_controller.go:780] updating PersistentVolume[pv-w-canbind]: phase Available already set
I0911 18:31:54.700898  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (1.506208ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:54.701296  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (2.713589ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.701678  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind
I0911 18:31:54.701706  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind
I0911 18:31:54.701936  110822 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind", PVC "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind" on node "node-2"
I0911 18:31:54.701962  110822 scheduler_binder.go:718] storage class "wait-m9kh" of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind" does not support dynamic provisioning
I0911 18:31:54.702314  110822 scheduler_binder.go:692] Found matching volumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind" on node "node-1"
I0911 18:31:54.702428  110822 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind", node "node-1"
I0911 18:31:54.702508  110822 scheduler_assume_cache.go:320] Assumed v1.PersistentVolume "pv-w-canbind", version 35513
I0911 18:31:54.702613  110822 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind", node "node-1"
I0911 18:31:54.702633  110822 scheduler_binder.go:400] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind" bound to volume "pv-w-canbind"
I0911 18:31:54.704185  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind: (1.340971ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.704397  110822 scheduler_binder.go:406] updating PersistentVolume[pv-w-canbind]: bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind"
I0911 18:31:54.705275  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 35516
I0911 18:31:54.705317  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind (uid: def05412-abb3-4f6a-9f4c-c96d7078f99a)", boundByController: true
I0911 18:31:54.705330  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind
I0911 18:31:54.705348  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:54.705363  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:31:54.705396  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind" with version 35512
I0911 18:31:54.705415  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:54.705441  110822 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind (uid: def05412-abb3-4f6a-9f4c-c96d7078f99a)", boundByController: true
I0911 18:31:54.705473  110822 pv_controller.go:931] binding volume "pv-w-canbind" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind"
I0911 18:31:54.705484  110822 pv_controller.go:829] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind"
I0911 18:31:54.705518  110822 pv_controller.go:841] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind"
I0911 18:31:54.705527  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Bound
I0911 18:31:54.707658  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (1.85491ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.708068  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 35517
I0911 18:31:54.708110  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind (uid: def05412-abb3-4f6a-9f4c-c96d7078f99a)", boundByController: true
I0911 18:31:54.708123  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind
I0911 18:31:54.708140  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:54.708155  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:31:54.708794  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 35517
I0911 18:31:54.708829  110822 pv_controller.go:798] volume "pv-w-canbind" entered phase "Bound"
I0911 18:31:54.708851  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind]: binding to "pv-w-canbind"
I0911 18:31:54.708866  110822 pv_controller.go:901] volume "pv-w-canbind" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind"
I0911 18:31:54.712957  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind: (3.88933ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.713252  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind" with version 35520
I0911 18:31:54.713295  110822 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind]: bound to "pv-w-canbind"
I0911 18:31:54.713306  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind] status: set phase Bound
I0911 18:31:54.717837  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind/status: (4.248391ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.718285  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind" with version 35523
I0911 18:31:54.718325  110822 pv_controller.go:742] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind" entered phase "Bound"
I0911 18:31:54.718339  110822 pv_controller.go:957] volume "pv-w-canbind" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind"
I0911 18:31:54.718360  110822 pv_controller.go:958] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind (uid: def05412-abb3-4f6a-9f4c-c96d7078f99a)", boundByController: true
I0911 18:31:54.718399  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I0911 18:31:54.718427  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind" with version 35523
I0911 18:31:54.718441  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind]: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I0911 18:31:54.718457  110822 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind (uid: def05412-abb3-4f6a-9f4c-c96d7078f99a)", boundByController: true
I0911 18:31:54.718580  110822 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind]: claim is already correctly bound
I0911 18:31:54.718620  110822 pv_controller.go:931] binding volume "pv-w-canbind" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind"
I0911 18:31:54.718631  110822 pv_controller.go:829] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind"
I0911 18:31:54.718651  110822 pv_controller.go:841] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind"
I0911 18:31:54.718661  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Bound
I0911 18:31:54.718696  110822 pv_controller.go:780] updating PersistentVolume[pv-w-canbind]: phase Bound already set
I0911 18:31:54.718714  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind]: binding to "pv-w-canbind"
I0911 18:31:54.718757  110822 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind]: already bound to "pv-w-canbind"
I0911 18:31:54.718777  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind] status: set phase Bound
I0911 18:31:54.718797  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind] status: phase Bound already set
I0911 18:31:54.718809  110822 pv_controller.go:957] volume "pv-w-canbind" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind"
I0911 18:31:54.718831  110822 pv_controller.go:958] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind (uid: def05412-abb3-4f6a-9f4c-c96d7078f99a)", boundByController: true
I0911 18:31:54.718844  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I0911 18:31:54.804433  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind: (1.987579ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:54.903722  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind: (1.681097ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.003343  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind: (1.333813ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.103748  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind: (1.783778ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.210266  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind: (8.234403ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.306104  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind: (2.444331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.403962  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind: (1.935982ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.504513  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind: (2.262995ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.556372  110822 cache.go:669] Couldn't expire cache for pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind. Binding is still in progress.
I0911 18:31:55.603897  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind: (1.856166ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.703864  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind: (1.924353ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.704683  110822 scheduler_binder.go:546] All PVCs for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind" are bound
I0911 18:31:55.704741  110822 factory.go:606] Attempting to bind pod-w-canbind to node-1
I0911 18:31:55.707041  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind/binding: (2.013737ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.707535  110822 scheduler.go:667] pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0911 18:31:55.709575  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (1.705485ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.808462  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind: (6.410227ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.810642  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind: (1.481249ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.812544  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind: (1.402602ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.826354  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (13.24567ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.830251  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (3.428575ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.830838  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind" deleted
I0911 18:31:55.830887  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 35517
I0911 18:31:55.830928  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind (uid: def05412-abb3-4f6a-9f4c-c96d7078f99a)", boundByController: true
I0911 18:31:55.830938  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind
I0911 18:31:55.832128  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind: (1.019158ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:55.832380  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind not found
I0911 18:31:55.832404  110822 pv_controller.go:575] volume "pv-w-canbind" is released and reclaim policy "Retain" will be executed
I0911 18:31:55.832413  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind]: set phase Released
I0911 18:31:55.835036  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (2.357458ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:55.836286  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 35811
I0911 18:31:55.836324  110822 pv_controller.go:798] volume "pv-w-canbind" entered phase "Released"
I0911 18:31:55.836333  110822 pv_controller.go:1011] reclaimVolume[pv-w-canbind]: policy is Retain, nothing to do
I0911 18:31:55.836614  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind" with version 35811
I0911 18:31:55.836648  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind]: phase: Released, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind (uid: def05412-abb3-4f6a-9f4c-c96d7078f99a)", boundByController: true
I0911 18:31:55.836659  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind
I0911 18:31:55.836680  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind not found
I0911 18:31:55.836688  110822 pv_controller.go:1011] reclaimVolume[pv-w-canbind]: policy is Retain, nothing to do
I0911 18:31:55.837540  110822 store.go:228] deletion of /da48a647-be65-4f29-98ad-e6c70c881bf1/persistentvolumes/pv-w-canbind failed because of a conflict, going to retry
I0911 18:31:55.839763  110822 pv_controller_base.go:212] volume "pv-w-canbind" deleted
I0911 18:31:55.839794  110822 pv_controller_base.go:396] deletion of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind" was already processed
I0911 18:31:55.840100  110822 httplog.go:90] DELETE /api/v1/persistentvolumes: (9.278767ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.848910  110822 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.52791ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
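The "wait can bind" scenario above exercises the WaitForFirstConsumer flow: the PVC stays Pending until a consuming pod appears, the scheduler finds a matching PV on node-1, and the PV controller walks the volume through Available, Bound, and finally Released once the claim is deleted (reclaim policy Retain). A minimal sketch of that phase progression — illustrative only; the states and events are inferred from the log lines, not taken from the pv_controller source:

```python
# Illustrative state machine mirroring the PV phase transitions seen in the log:
# Pending -> Available (volume unused) -> Bound (claim bound) -> Released (claim deleted).
# This is a teaching sketch, not the actual pv_controller implementation.

TRANSITIONS = {
    ("Pending", "volume_unused"): "Available",
    ("Available", "claim_bound"): "Bound",
    ("Bound", "claim_deleted"): "Released",
}

def sync_volume(phase: str, event: str) -> str:
    """Return the next phase, or the current one if nothing changes
    (matching log lines like 'phase Available already set')."""
    return TRANSITIONS.get((phase, event), phase)

phases = ["Pending"]
for event in ["volume_unused", "volume_unused", "claim_bound", "claim_deleted"]:
    phases.append(sync_volume(phases[-1], event))

print(phases)  # ['Pending', 'Available', 'Available', 'Bound', 'Released']
```

Note how the second "volume_unused" event is a no-op, matching the repeated "phase Available already set" resyncs in the log.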
I0911 18:31:55.849453  110822 volume_binding_test.go:195] Running test wait cannot bind
I0911 18:31:55.851253  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.618715ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.853399  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.426704ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.856744  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (2.617972ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.857709  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind", version 35818
I0911 18:31:55.857739  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:31:55.857759  110822 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind]: no volume found
I0911 18:31:55.857810  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind] status: set phase Pending
I0911 18:31:55.857845  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind] status: phase Pending already set
I0911 18:31:55.857992  110822 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2", Name:"pvc-w-cannotbind", UID:"fe39f895-3685-47f9-9791-57e107bb64a0", APIVersion:"v1", ResourceVersion:"35818", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0911 18:31:55.860887  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (2.786181ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.861410  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind
I0911 18:31:55.861423  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind
I0911 18:31:55.861592  110822 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind", PVC "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind" on node "node-1"
I0911 18:31:55.861615  110822 scheduler_binder.go:718] storage class "wait-knlr" of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind" does not support dynamic provisioning
I0911 18:31:55.861657  110822 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind", PVC "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind" on node "node-2"
I0911 18:31:55.861666  110822 scheduler_binder.go:718] storage class "wait-knlr" of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind" does not support dynamic provisioning
I0911 18:31:55.861700  110822 factory.go:541] Unable to schedule volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I0911 18:31:55.861729  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I0911 18:31:55.863054  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (3.855146ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:55.865122  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind: (1.435464ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47336]
I0911 18:31:55.865629  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (1.922675ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48954]
I0911 18:31:55.865968  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind/status: (2.487616ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46354]
I0911 18:31:55.868264  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind: (1.388781ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:55.868821  110822 generic_scheduler.go:337] Preemption will not help schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind on any node.
I0911 18:31:55.965719  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind: (2.667463ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:55.967722  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-cannotbind: (1.36693ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:55.972570  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind
I0911 18:31:55.972611  110822 scheduler.go:526] Skip schedule deleting pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind
I0911 18:31:55.974591  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (1.652201ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0911 18:31:55.974963  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (6.79508ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:55.980604  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (5.298852ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:55.980824  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind" deleted
I0911 18:31:55.982571  110822 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.325854ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:55.993804  110822 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (10.88598ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
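In the "wait cannot bind" scenario above, scheduling fails because neither node has a matching PV and the storage class ("wait-knlr" here) has no provisioner, so dynamic provisioning cannot fill the gap; preemption cannot help either, since evicting pods frees no volumes. A hedged sketch of the per-node feasibility check — the node names come from the log, but the logic is a paraphrase of the scheduler_binder messages, not its source:

```python
# Per-node volume feasibility, paraphrasing the scheduler_binder log lines:
# a node is feasible for the pod's PVC if a matching PV exists there, or if the
# storage class can dynamically provision one. Illustrative sketch only.

def node_is_feasible(node: str, matching_pvs: dict, class_has_provisioner: bool) -> bool:
    if matching_pvs.get(node):          # "Found matching volumes ... on node ..."
        return True
    return class_has_provisioner        # else: "does not support dynamic provisioning"

nodes = ["node-1", "node-2"]
feasible = [n for n in nodes
            if node_is_feasible(n, matching_pvs={}, class_has_provisioner=False)]
# Empty result corresponds to: "0/2 nodes are available: 2 node(s) didn't find
# available persistent volumes to bind."
print(feasible)  # []
```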
I0911 18:31:55.994096  110822 volume_binding_test.go:195] Running test wait pvc prebound
I0911 18:31:55.997138  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.050124ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.000266  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.401414ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.004330  110822 httplog.go:90] POST /api/v1/persistentvolumes: (2.692846ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.004545  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-pvc-prebound", version 35840
I0911 18:31:56.004573  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I0911 18:31:56.004589  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I0911 18:31:56.004978  110822 pv_controller.go:777] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I0911 18:31:56.008010  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound", version 35841
I0911 18:31:56.008153  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I0911 18:31:56.013707  110822 pv_controller.go:347] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I0911 18:31:56.013879  110822 pv_controller.go:366] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Pending, bound to: "", boundByController: false
I0911 18:31:56.013945  110822 pv_controller.go:370] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: volume is unbound, binding
I0911 18:31:56.013992  110822 pv_controller.go:931] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:31:56.014045  110822 pv_controller.go:829] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:31:56.014129  110822 pv_controller.go:849] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I0911 18:31:56.016891  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (11.868495ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.017171  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (11.919715ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0911 18:31:56.017590  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 35842
I0911 18:31:56.017634  110822 pv_controller.go:798] volume "pv-w-pvc-prebound" entered phase "Available"
I0911 18:31:56.017666  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 35842
I0911 18:31:56.017683  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I0911 18:31:56.017704  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I0911 18:31:56.017711  110822 pv_controller.go:777] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I0911 18:31:56.017719  110822 pv_controller.go:780] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I0911 18:31:56.029307  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (14.389945ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49010]
I0911 18:31:56.029996  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (12.19938ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48952]
I0911 18:31:56.030271  110822 pv_controller.go:852] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:56.030302  110822 pv_controller.go:934] error binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:56.030320  110822 pv_controller_base.go:246] could not sync claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:31:56.031212  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound
I0911 18:31:56.031403  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound
E0911 18:31:56.031693  110822 factory.go:557] Error scheduling volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0911 18:31:56.031822  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I0911 18:31:56.036965  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (3.967567ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.037940  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (5.680255ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49010]
I0911 18:31:56.038049  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound/status: (4.049615ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
E0911 18:31:56.038475  110822 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0911 18:31:56.038569  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound
I0911 18:31:56.038623  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound
E0911 18:31:56.038893  110822 factory.go:557] Error scheduling volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0911 18:31:56.038954  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E0911 18:31:56.038970  110822 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0911 18:31:56.041484  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (2.062835ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:31:56.042722  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.853178ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
E0911 18:31:56.043042  110822 factory.go:581] pod is already present in unschedulableQ
I0911 18:31:56.133571  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.766587ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.233228  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.448349ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.333245  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.489208ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.433470  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.698226ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.533269  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.561874ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.633268  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.583647ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.733083  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.32503ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.833516  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.730237ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:56.933981  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.181819ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:57.033451  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.67226ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:57.133519  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.803637ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:57.233591  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.867786ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:57.333048  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.312829ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:57.433846  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.039841ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:57.533398  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.707717ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:57.633467  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.709745ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:57.733850  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.022353ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:57.833647  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.835981ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:57.933543  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.712498ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:58.033448  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.641351ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:58.133134  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.501346ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:58.233142  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.488193ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:58.333216  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.491788ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:58.433223  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.491687ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:58.533484  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.757117ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:58.633518  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.788786ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:58.733861  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.066929ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:58.833730  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.96533ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:58.933560  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.817306ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:59.033651  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.82548ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:59.133443  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.711819ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:59.233359  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.591104ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:59.333224  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.517957ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:59.433749  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.986605ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:59.533808  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.042406ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:59.633567  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.83304ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:59.733753  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.72783ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:59.833367  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.626721ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:31:59.934103  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.061775ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.033436  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.78822ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.133432  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.70693ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.272890  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (41.188634ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.333310  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.558016ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.433916  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.160126ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.533170  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.450928ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.564219  110822 httplog.go:90] GET /api/v1/namespaces/default: (1.213024ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.566164  110822 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.518743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.567814  110822 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.318084ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.633053  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.345361ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.733264  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.528293ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.833582  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.587145ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:00.933220  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.484703ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:01.033200  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.54265ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:01.133786  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.122886ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:01.235143  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.723175ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:01.335970  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (3.388704ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:01.435960  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.266576ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:01.540579  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (8.860259ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:01.635074  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (3.367936ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:01.737959  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (6.279456ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:01.833987  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.296513ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:01.933334  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.666187ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:02.033153  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.439774ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:02.133844  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.578844ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:02.233324  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.643232ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:02.333358  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.648822ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:02.433523  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.65713ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:02.533234  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.566563ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:02.633270  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.588608ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:02.733283  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.577447ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:02.833258  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.578356ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:02.933229  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.569721ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:03.033230  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.584717ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:03.133141  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.461682ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:03.235033  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (3.280272ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:03.333273  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.569296ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:03.433329  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.614328ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:03.533045  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.349734ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:03.632893  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.279351ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:03.733743  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.936445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:03.833697  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.756317ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:03.933216  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.497387ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:04.033076  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.408621ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:04.132982  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.357402ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:04.233099  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.357505ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:04.333081  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.301107ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:04.433350  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.585797ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:04.533142  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.443421ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:04.633106  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.49999ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:04.733756  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.964938ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:04.834324  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.563272ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:04.933421  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.737453ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:05.033482  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.749067ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:05.133328  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.554071ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:05.233342  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.613464ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:05.333478  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.756429ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:05.433752  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.97552ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:05.534015  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.325538ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:05.633568  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.825971ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:05.734393  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.671975ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:05.833364  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.664914ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:05.933835  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.016095ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:06.033184  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.484315ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:06.133387  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.647548ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:06.233292  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.584081ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:06.333322  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.62644ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:06.434621  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.312591ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:06.534052  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.292566ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:06.633413  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.681957ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:06.733199  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.51525ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:06.833169  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.426888ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:06.854045  110822 pv_controller_base.go:419] resyncing PV controller
I0911 18:32:06.854184  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 35842
I0911 18:32:06.854226  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I0911 18:32:06.854246  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I0911 18:32:06.854254  110822 pv_controller.go:777] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I0911 18:32:06.854263  110822 pv_controller.go:780] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I0911 18:32:06.854288  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" with version 35841
I0911 18:32:06.854313  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I0911 18:32:06.854343  110822 pv_controller.go:347] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I0911 18:32:06.854359  110822 pv_controller.go:366] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I0911 18:32:06.854380  110822 pv_controller.go:370] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: volume is unbound, binding
I0911 18:32:06.854405  110822 pv_controller.go:931] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:32:06.854417  110822 pv_controller.go:829] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:32:06.854453  110822 pv_controller.go:849] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I0911 18:32:06.857276  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (2.382509ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:06.857553  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 38127
I0911 18:32:06.857579  110822 pv_controller.go:862] updating PersistentVolume[pv-w-pvc-prebound]: bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:32:06.857593  110822 pv_controller.go:777] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I0911 18:32:06.857606  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 38127
I0911 18:32:06.857643  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound (uid: 6ca1b080-dfef-433c-8b57-09666d965360)", boundByController: true
I0911 18:32:06.857657  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound
I0911 18:32:06.857678  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I0911 18:32:06.857694  110822 pv_controller.go:619] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I0911 18:32:06.857703  110822 pv_controller.go:777] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I0911 18:32:06.858028  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound
I0911 18:32:06.858051  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound
E0911 18:32:06.858299  110822 factory.go:557] Error scheduling volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0911 18:32:06.858334  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I0911 18:32:06.859713  110822 store.go:362] GuaranteedUpdate of /da48a647-be65-4f29-98ad-e6c70c881bf1/persistentvolumes/pv-w-pvc-prebound failed because of a conflict, going to retry
I0911 18:32:06.859909  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (1.939944ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:06.859949  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (2.133345ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:06.860163  110822 pv_controller.go:790] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:32:06.860191  110822 pv_controller.go:940] error binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound": failed saving the volume status: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:32:06.860198  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.229423ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51104]
I0911 18:32:06.860209  110822 pv_controller_base.go:246] could not sync claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I0911 18:32:06.860362  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 38128
I0911 18:32:06.860388  110822 pv_controller.go:798] volume "pv-w-pvc-prebound" entered phase "Bound"
I0911 18:32:06.860453  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 38128
I0911 18:32:06.860580  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound (uid: 6ca1b080-dfef-433c-8b57-09666d965360)", boundByController: true
I0911 18:32:06.860603  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound
I0911 18:32:06.860623  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I0911 18:32:06.860666  110822 pv_controller.go:619] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I0911 18:32:06.860674  110822 pv_controller.go:777] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I0911 18:32:06.860683  110822 pv_controller.go:780] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I0911 18:32:06.860859  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound/status: (1.919209ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51102]
E0911 18:32:06.861157  110822 scheduler.go:333] Error updating the condition of the pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound: Operation cannot be fulfilled on pods "pod-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
E0911 18:32:06.861183  110822 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0911 18:32:06.861794  110822 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events/pod-w-pvc-prebound.15c375e3b05a714b: (2.510188ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51106]
I0911 18:32:06.933390  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.683364ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:07.033399  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.730716ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:07.133281  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.376492ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:07.233943  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.046799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:07.333490  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.876697ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:07.433800  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.157219ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:07.533358  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.638085ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:07.633469  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.75931ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:07.733155  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.496533ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:07.833379  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.525826ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:07.933201  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.544739ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:08.033266  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.555785ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:08.133395  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.678524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:08.233809  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.089537ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:08.333700  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.978358ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:08.433145  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.489317ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:08.533427  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.690753ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:08.558514  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound
I0911 18:32:08.558552  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound
E0911 18:32:08.558816  110822 factory.go:557] Error scheduling volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound: pod has unbound immediate PersistentVolumeClaims (repeated 2 times); retrying
I0911 18:32:08.558858  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E0911 18:32:08.558874  110822 scheduler.go:559] error selecting node for pod: pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I0911 18:32:08.560853  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.600561ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:08.561889  110822 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events/pod-w-pvc-prebound.15c375e3b0c74e51: (2.479695ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:08.633371  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.75486ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:08.733866  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.125635ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:08.833556  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.767289ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:08.933658  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.827166ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:09.033528  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.57103ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:09.132965  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.302983ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:09.233312  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.63194ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:09.333192  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.464799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:09.433695  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.789934ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:09.533118  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.383712ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:09.633576  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.853097ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:09.733473  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.650432ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:09.833793  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.094256ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:09.933600  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.830105ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.033406  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.710234ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.133248  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.595374ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.233653  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.925721ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.332988  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.289209ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.433365  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.682124ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.533729  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.732764ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.565006  110822 httplog.go:90] GET /api/v1/namespaces/default: (1.602664ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.567163  110822 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.737589ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.568675  110822 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.052563ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.634482  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.845267ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.733745  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.843228ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.832972  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.308678ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:10.933147  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.410286ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:11.033062  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.435315ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:11.133380  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.771524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:11.233286  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.62124ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:11.333664  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.946504ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:11.433074  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.340792ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:11.533404  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.68024ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:11.633292  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.566969ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:11.733793  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.101811ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:11.833451  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.702531ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:11.933002  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.309608ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:12.033657  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.809416ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:12.133265  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.516374ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:12.233356  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.640515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:12.333344  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.654505ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:12.433270  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.562009ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:12.532938  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.222409ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:12.633442  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.630446ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:12.733168  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.445034ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:12.833777  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.280437ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:12.933923  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.181515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:13.033636  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.904557ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:13.135198  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (3.504778ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:13.234480  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.333324ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:13.333166  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.500922ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:13.433433  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.732333ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:13.533704  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.98094ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:13.633739  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.949789ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:13.733554  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.831949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:13.835020  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (3.290032ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:13.934044  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.327411ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:14.033143  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.483341ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:14.133438  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.733339ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:14.233171  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.486738ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:14.333337  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.618299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:14.432933  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.292802ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:14.533619  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.013835ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:14.633793  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.059729ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:14.733920  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.236126ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:14.833096  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.432751ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:14.932987  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.383658ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:15.033169  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.481837ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:15.133249  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.574695ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:15.233344  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.616236ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:15.333621  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.805587ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:15.433483  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.764741ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:15.533344  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.587334ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:15.633065  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.353871ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:15.733068  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.420299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:15.833072  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.418026ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:15.933379  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.656069ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:16.032987  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.244356ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:16.133453  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.654157ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:16.232996  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.297028ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:16.333470  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.736279ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:16.433020  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.3723ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:16.533239  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.531999ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:16.633659  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.998064ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:16.733390  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.690325ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:16.832948  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.353433ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:16.933089  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.429286ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.033216  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.595195ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.133572  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.85461ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.233344  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.627487ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.273652  110822 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.384046ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.275247  110822 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.164831ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.276841  110822 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.285604ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.333198  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.424929ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.433220  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.575819ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.533249  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.541117ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.633156  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.469376ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.733240  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.532228ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.833148  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.436973ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:17.933160  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.454837ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:18.033163  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.397606ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:18.133656  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.988065ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:18.234246  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.534635ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:18.333181  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.464593ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:18.433795  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.132972ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:18.533381  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.727646ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:18.633873  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.652817ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:18.733216  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.559966ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:18.833173  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.496809ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:18.933598  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.913162ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:19.033668  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.940319ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:19.133421  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.738644ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:19.234084  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.13314ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:19.333273  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.584673ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:19.433592  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.820573ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:19.533642  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.908606ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:19.633279  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.573708ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:19.733217  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.554707ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:19.833521  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.795985ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:19.933316  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.592644ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.033384  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.680006ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.133449  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.774006ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.233349  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.584462ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.333882  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.968795ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.433639  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.860297ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.533541  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.818789ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.564664  110822 httplog.go:90] GET /api/v1/namespaces/default: (1.298474ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.566048  110822 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.006771ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.567356  110822 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (941.126µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.634263  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (2.567365ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.733757  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.872713ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.833299  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.547546ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:20.933445  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.733422ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.033427  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.726066ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.133510  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.737071ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.233258  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.51161ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.333612  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.863098ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.433430  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.644873ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.533404  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.712881ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.633349  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.636672ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.733355  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.611083ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.833254  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.55195ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.854319  110822 pv_controller_base.go:419] resyncing PV controller
I0911 18:32:21.854414  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 38128
I0911 18:32:21.854463  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound (uid: 6ca1b080-dfef-433c-8b57-09666d965360)", boundByController: true
I0911 18:32:21.854475  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound
I0911 18:32:21.854552  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I0911 18:32:21.854571  110822 pv_controller.go:619] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I0911 18:32:21.854581  110822 pv_controller.go:777] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I0911 18:32:21.854590  110822 pv_controller.go:780] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I0911 18:32:21.854613  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" with version 35841
I0911 18:32:21.854625  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I0911 18:32:21.854639  110822 pv_controller.go:347] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I0911 18:32:21.854660  110822 pv_controller.go:366] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound (uid: 6ca1b080-dfef-433c-8b57-09666d965360)", boundByController: true
I0911 18:32:21.854675  110822 pv_controller.go:390] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: volume already bound, finishing the binding
I0911 18:32:21.854686  110822 pv_controller.go:931] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:32:21.854696  110822 pv_controller.go:829] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:32:21.854726  110822 pv_controller.go:841] updating PersistentVolume[pv-w-pvc-prebound]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:32:21.854734  110822 pv_controller.go:777] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I0911 18:32:21.854743  110822 pv_controller.go:780] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I0911 18:32:21.854752  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I0911 18:32:21.854768  110822 pv_controller.go:901] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:32:21.858384  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-prebound: (2.893494ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.859046  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound
I0911 18:32:21.859073  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound
I0911 18:32:21.859273  110822 scheduler_binder.go:652] All bound volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound" match with Node "node-1"
I0911 18:32:21.859344  110822 scheduler_binder.go:646] PersistentVolume "pv-w-pvc-prebound", Node "node-2" mismatch for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound": No matching NodeSelectorTerms
I0911 18:32:21.859399  110822 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound", node "node-1"
I0911 18:32:21.859416  110822 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound", node "node-1": all PVCs bound and nothing to do
I0911 18:32:21.859463  110822 factory.go:606] Attempting to bind pod-w-pvc-prebound to node-1
I0911 18:32:21.859876  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" with version 40546
I0911 18:32:21.859908  110822 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: bound to "pv-w-pvc-prebound"
I0911 18:32:21.859919  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound] status: set phase Bound
I0911 18:32:21.863108  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound/binding: (2.547751ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:21.863729  110822 scheduler.go:667] pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-pvc-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0911 18:32:21.865477  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (1.444982ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:21.866775  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-prebound/status: (2.638993ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.867016  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" with version 40550
I0911 18:32:21.867044  110822 pv_controller.go:742] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" entered phase "Bound"
I0911 18:32:21.867062  110822 pv_controller.go:957] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:32:21.867088  110822 pv_controller.go:958] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound (uid: 6ca1b080-dfef-433c-8b57-09666d965360)", boundByController: true
I0911 18:32:21.867108  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I0911 18:32:21.867138  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" with version 40550
I0911 18:32:21.867159  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I0911 18:32:21.867175  110822 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: volume "pv-w-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound (uid: 6ca1b080-dfef-433c-8b57-09666d965360)", boundByController: true
I0911 18:32:21.867199  110822 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: claim is already correctly bound
I0911 18:32:21.867215  110822 pv_controller.go:931] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:32:21.867226  110822 pv_controller.go:829] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:32:21.867246  110822 pv_controller.go:841] updating PersistentVolume[pv-w-pvc-prebound]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:32:21.867255  110822 pv_controller.go:777] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I0911 18:32:21.867263  110822 pv_controller.go:780] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I0911 18:32:21.867272  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I0911 18:32:21.867290  110822 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound]: already bound to "pv-w-pvc-prebound"
I0911 18:32:21.867299  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound] status: set phase Bound
I0911 18:32:21.867317  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound] status: phase Bound already set
I0911 18:32:21.867339  110822 pv_controller.go:957] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound"
I0911 18:32:21.867357  110822 pv_controller.go:958] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound (uid: 6ca1b080-dfef-433c-8b57-09666d965360)", boundByController: true
I0911 18:32:21.867371  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I0911 18:32:21.937915  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-pvc-prebound: (1.656894ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.940568  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-prebound: (2.131308ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.942644  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-pvc-prebound: (1.673794ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.948937  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (5.886165ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.956148  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (6.134913ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.956691  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" deleted
I0911 18:32:21.956722  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 38128
I0911 18:32:21.956747  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound (uid: 6ca1b080-dfef-433c-8b57-09666d965360)", boundByController: true
I0911 18:32:21.956755  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound
I0911 18:32:21.958237  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-prebound: (1.317679ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:21.958470  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound not found
I0911 18:32:21.958537  110822 pv_controller.go:575] volume "pv-w-pvc-prebound" is released and reclaim policy "Retain" will be executed
I0911 18:32:21.958551  110822 pv_controller.go:777] updating PersistentVolume[pv-w-pvc-prebound]: set phase Released
I0911 18:32:21.960922  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (2.140954ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:21.961129  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 40594
I0911 18:32:21.961157  110822 pv_controller.go:798] volume "pv-w-pvc-prebound" entered phase "Released"
I0911 18:32:21.961167  110822 pv_controller.go:1011] reclaimVolume[pv-w-pvc-prebound]: policy is Retain, nothing to do
I0911 18:32:21.961357  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 40594
I0911 18:32:21.961389  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Released, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound (uid: 6ca1b080-dfef-433c-8b57-09666d965360)", boundByController: true
I0911 18:32:21.961399  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound
I0911 18:32:21.961413  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound not found
I0911 18:32:21.961419  110822 pv_controller.go:1011] reclaimVolume[pv-w-pvc-prebound]: policy is Retain, nothing to do
I0911 18:32:21.962836  110822 httplog.go:90] DELETE /api/v1/persistentvolumes: (6.082158ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.963241  110822 pv_controller_base.go:212] volume "pv-w-pvc-prebound" deleted
I0911 18:32:21.963283  110822 pv_controller_base.go:396] deletion of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-prebound" was already processed
I0911 18:32:21.971811  110822 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.445698ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.971992  110822 volume_binding_test.go:195] Running test wait can bind two
I0911 18:32:21.973600  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.423759ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.975956  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.859397ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.977990  110822 httplog.go:90] POST /api/v1/persistentvolumes: (1.68355ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.978867  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-canbind-2", version 40604
I0911 18:32:21.978895  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Pending, bound to: "", boundByController: false
I0911 18:32:21.978923  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I0911 18:32:21.978935  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I0911 18:32:21.980371  110822 httplog.go:90] POST /api/v1/persistentvolumes: (1.741164ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.981801  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (2.60028ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:21.982412  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-2" with version 40606
I0911 18:32:21.982440  110822 pv_controller.go:798] volume "pv-w-canbind-2" entered phase "Available"
I0911 18:32:21.982466  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-canbind-3", version 40605
I0911 18:32:21.982483  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Pending, bound to: "", boundByController: false
I0911 18:32:21.982528  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I0911 18:32:21.982536  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I0911 18:32:21.983258  110822 httplog.go:90] POST /api/v1/persistentvolumes: (2.185432ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.984556  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (1.853211ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:21.984746  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-3" with version 40610
I0911 18:32:21.984768  110822 pv_controller.go:798] volume "pv-w-canbind-3" entered phase "Available"
I0911 18:32:21.984791  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-2" with version 40606
I0911 18:32:21.984819  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "", boundByController: false
I0911 18:32:21.984841  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I0911 18:32:21.984848  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I0911 18:32:21.984857  110822 pv_controller.go:780] updating PersistentVolume[pv-w-canbind-2]: phase Available already set
I0911 18:32:21.984873  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-canbind-5", version 40608
I0911 18:32:21.984885  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Pending, bound to: "", boundByController: false
I0911 18:32:21.984907  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I0911 18:32:21.984912  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I0911 18:32:21.985686  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (1.874019ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.985893  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2", version 40612
I0911 18:32:21.986146  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:32:21.986234  110822 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2]: no volume found
I0911 18:32:21.986312  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2] status: set phase Pending
I0911 18:32:21.986357  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2] status: phase Pending already set
I0911 18:32:21.986620  110822 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2", Name:"pvc-w-canbind-2", UID:"9da370ce-2064-4da9-9614-35099c068b7a", APIVersion:"v1", ResourceVersion:"40612", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0911 18:32:21.987214  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-5/status: (1.731294ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:21.987451  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-5" with version 40613
I0911 18:32:21.987474  110822 pv_controller.go:798] volume "pv-w-canbind-5" entered phase "Available"
I0911 18:32:21.987549  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-3" with version 40610
I0911 18:32:21.987576  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "", boundByController: false
I0911 18:32:21.987602  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I0911 18:32:21.987609  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I0911 18:32:21.987618  110822 pv_controller.go:780] updating PersistentVolume[pv-w-canbind-3]: phase Available already set
I0911 18:32:21.987635  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-5" with version 40613
I0911 18:32:21.987658  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Available, bound to: "", boundByController: false
I0911 18:32:21.987676  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I0911 18:32:21.987682  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I0911 18:32:21.987690  110822 pv_controller.go:780] updating PersistentVolume[pv-w-canbind-5]: phase Available already set
I0911 18:32:21.989712  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (2.529208ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48950]
I0911 18:32:21.997219  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3", version 40617
I0911 18:32:21.997258  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:32:21.997292  110822 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3]: no volume found
I0911 18:32:21.997314  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3] status: set phase Pending
I0911 18:32:21.997338  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3] status: phase Pending already set
I0911 18:32:21.997646  110822 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2", Name:"pvc-w-canbind-3", UID:"8700aa0e-26fb-434e-ac49-c4f496afb0e8", APIVersion:"v1", ResourceVersion:"40617", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I0911 18:32:21.997970  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (9.230116ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53124]
I0911 18:32:22.002849  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (4.944666ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.002905  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (4.536578ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53124]
I0911 18:32:22.003473  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind-2
I0911 18:32:22.003509  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind-2
I0911 18:32:22.003803  110822 scheduler_binder.go:692] Found matching volumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind-2" on node "node-2"
I0911 18:32:22.003957  110822 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind-2", PVC "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" on node "node-1"
I0911 18:32:22.003979  110822 scheduler_binder.go:718] storage class "wait-z6p4" of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" does not support dynamic provisioning
I0911 18:32:22.004142  110822 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind-2", node "node-2"
I0911 18:32:22.004243  110822 scheduler_assume_cache.go:320] Assumed v1.PersistentVolume "pv-w-canbind-3", version 40610
I0911 18:32:22.004312  110822 scheduler_assume_cache.go:320] Assumed v1.PersistentVolume "pv-w-canbind-2", version 40606
I0911 18:32:22.004466  110822 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind-2", node "node-2"
I0911 18:32:22.004548  110822 scheduler_binder.go:400] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2" bound to volume "pv-w-canbind-3"
I0911 18:32:22.008048  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-3" with version 40628
I0911 18:32:22.008167  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2 (uid: 9da370ce-2064-4da9-9614-35099c068b7a)", boundByController: true
I0911 18:32:22.008222  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2
I0911 18:32:22.008280  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:32:22.008338  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-w-canbind-3]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:32:22.008430  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2" with version 40612
I0911 18:32:22.008510  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:32:22.008601  110822 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2]: volume "pv-w-canbind-3" found: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2 (uid: 9da370ce-2064-4da9-9614-35099c068b7a)", boundByController: true
I0911 18:32:22.008648  110822 pv_controller.go:931] binding volume "pv-w-canbind-3" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2"
I0911 18:32:22.008690  110822 pv_controller.go:829] updating PersistentVolume[pv-w-canbind-3]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2"
I0911 18:32:22.008735  110822 pv_controller.go:841] updating PersistentVolume[pv-w-canbind-3]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2"
I0911 18:32:22.008803  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-3]: set phase Bound
I0911 18:32:22.011040  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3: (6.051321ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.011332  110822 scheduler_binder.go:406] updating PersistentVolume[pv-w-canbind-3]: bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2"
I0911 18:32:22.011351  110822 scheduler_binder.go:400] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" bound to volume "pv-w-canbind-2"
I0911 18:32:22.011849  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (2.620114ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:22.012148  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-3" with version 40631
I0911 18:32:22.012181  110822 pv_controller.go:798] volume "pv-w-canbind-3" entered phase "Bound"
I0911 18:32:22.012209  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2]: binding to "pv-w-canbind-3"
I0911 18:32:22.012224  110822 pv_controller.go:901] volume "pv-w-canbind-3" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2"
I0911 18:32:22.012408  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-3" with version 40631
I0911 18:32:22.012455  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2 (uid: 9da370ce-2064-4da9-9614-35099c068b7a)", boundByController: true
I0911 18:32:22.012473  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2
I0911 18:32:22.012512  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:32:22.012527  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-w-canbind-3]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:32:22.014483  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind-2: (1.884131ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:22.014685  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2" with version 40635
I0911 18:32:22.014704  110822 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2]: bound to "pv-w-canbind-3"
I0911 18:32:22.014711  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2] status: set phase Bound
I0911 18:32:22.016275  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind-2/status: (1.419191ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:22.016634  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2" with version 40636
I0911 18:32:22.016662  110822 pv_controller.go:742] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2" entered phase "Bound"
I0911 18:32:22.016680  110822 pv_controller.go:957] volume "pv-w-canbind-3" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2"
I0911 18:32:22.016705  110822 pv_controller.go:958] volume "pv-w-canbind-3" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2 (uid: 9da370ce-2064-4da9-9614-35099c068b7a)", boundByController: true
I0911 18:32:22.016717  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2: (2.892994ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.016722  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I0911 18:32:22.016753  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2" with version 40636
I0911 18:32:22.016767  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2]: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I0911 18:32:22.016785  110822 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2]: volume "pv-w-canbind-3" found: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2 (uid: 9da370ce-2064-4da9-9614-35099c068b7a)", boundByController: true
I0911 18:32:22.016802  110822 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2]: claim is already correctly bound
I0911 18:32:22.016811  110822 pv_controller.go:931] binding volume "pv-w-canbind-3" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2"
I0911 18:32:22.016822  110822 pv_controller.go:829] updating PersistentVolume[pv-w-canbind-3]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2"
I0911 18:32:22.016839  110822 pv_controller.go:841] updating PersistentVolume[pv-w-canbind-3]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2"
I0911 18:32:22.016848  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-3]: set phase Bound
I0911 18:32:22.016857  110822 pv_controller.go:780] updating PersistentVolume[pv-w-canbind-3]: phase Bound already set
I0911 18:32:22.016867  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2]: binding to "pv-w-canbind-3"
I0911 18:32:22.016885  110822 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2]: already bound to "pv-w-canbind-3"
I0911 18:32:22.016894  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2] status: set phase Bound
I0911 18:32:22.016914  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2] status: phase Bound already set
I0911 18:32:22.016927  110822 pv_controller.go:957] volume "pv-w-canbind-3" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2"
I0911 18:32:22.016950  110822 pv_controller.go:958] volume "pv-w-canbind-3" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2 (uid: 9da370ce-2064-4da9-9614-35099c068b7a)", boundByController: true
I0911 18:32:22.016964  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I0911 18:32:22.017173  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-2" with version 40637
I0911 18:32:22.017193  110822 scheduler_binder.go:406] updating PersistentVolume[pv-w-canbind-2]: bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3"
I0911 18:32:22.017196  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3 (uid: 8700aa0e-26fb-434e-ac49-c4f496afb0e8)", boundByController: true
I0911 18:32:22.017213  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3
I0911 18:32:22.017224  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:32:22.017238  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-w-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:32:22.017258  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" with version 40617
I0911 18:32:22.017267  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:32:22.017286  110822 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3]: volume "pv-w-canbind-2" found: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3 (uid: 8700aa0e-26fb-434e-ac49-c4f496afb0e8)", boundByController: true
I0911 18:32:22.017293  110822 pv_controller.go:931] binding volume "pv-w-canbind-2" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3"
I0911 18:32:22.017301  110822 pv_controller.go:829] updating PersistentVolume[pv-w-canbind-2]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3"
I0911 18:32:22.017310  110822 pv_controller.go:841] updating PersistentVolume[pv-w-canbind-2]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3"
I0911 18:32:22.017316  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-2]: set phase Bound
I0911 18:32:22.019553  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (2.011799ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.019787  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-2" with version 40638
I0911 18:32:22.019816  110822 pv_controller.go:798] volume "pv-w-canbind-2" entered phase "Bound"
I0911 18:32:22.019830  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3]: binding to "pv-w-canbind-2"
I0911 18:32:22.019844  110822 pv_controller.go:901] volume "pv-w-canbind-2" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3"
I0911 18:32:22.020059  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-2" with version 40638
I0911 18:32:22.020242  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3 (uid: 8700aa0e-26fb-434e-ac49-c4f496afb0e8)", boundByController: true
I0911 18:32:22.020357  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3
I0911 18:32:22.020544  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:32:22.020643  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-w-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:32:22.022994  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind-3: (2.965934ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.023891  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" with version 40641
I0911 18:32:22.023931  110822 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3]: bound to "pv-w-canbind-2"
I0911 18:32:22.023942  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3] status: set phase Bound
I0911 18:32:22.026125  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind-3/status: (1.982543ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.026721  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" with version 40643
I0911 18:32:22.026750  110822 pv_controller.go:742] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" entered phase "Bound"
I0911 18:32:22.026767  110822 pv_controller.go:957] volume "pv-w-canbind-2" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3"
I0911 18:32:22.026790  110822 pv_controller.go:958] volume "pv-w-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3 (uid: 8700aa0e-26fb-434e-ac49-c4f496afb0e8)", boundByController: true
I0911 18:32:22.026805  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" status after binding: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I0911 18:32:22.026838  110822 pv_controller_base.go:526] storeObjectUpdate: ignoring claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" version 40641
I0911 18:32:22.027004  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" with version 40643
I0911 18:32:22.027021  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3]: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I0911 18:32:22.027039  110822 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3]: volume "pv-w-canbind-2" found: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3 (uid: 8700aa0e-26fb-434e-ac49-c4f496afb0e8)", boundByController: true
I0911 18:32:22.027049  110822 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3]: claim is already correctly bound
I0911 18:32:22.027058  110822 pv_controller.go:931] binding volume "pv-w-canbind-2" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3"
I0911 18:32:22.027068  110822 pv_controller.go:829] updating PersistentVolume[pv-w-canbind-2]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3"
I0911 18:32:22.027089  110822 pv_controller.go:841] updating PersistentVolume[pv-w-canbind-2]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3"
I0911 18:32:22.027098  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-2]: set phase Bound
I0911 18:32:22.027106  110822 pv_controller.go:780] updating PersistentVolume[pv-w-canbind-2]: phase Bound already set
I0911 18:32:22.027115  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3]: binding to "pv-w-canbind-2"
I0911 18:32:22.027130  110822 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3]: already bound to "pv-w-canbind-2"
I0911 18:32:22.027138  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3] status: set phase Bound
I0911 18:32:22.027164  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3] status: phase Bound already set
I0911 18:32:22.027176  110822 pv_controller.go:957] volume "pv-w-canbind-2" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3"
I0911 18:32:22.027198  110822 pv_controller.go:958] volume "pv-w-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3 (uid: 8700aa0e-26fb-434e-ac49-c4f496afb0e8)", boundByController: true
I0911 18:32:22.027212  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" status after binding: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I0911 18:32:22.106637  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind-2: (3.093218ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.205166  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind-2: (1.498895ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.305429  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind-2: (1.876196ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.405168  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind-2: (1.508222ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.505811  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind-2: (2.097057ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.560601  110822 cache.go:669] Couldn't expire cache for pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind-2. Binding is still in progress.
I0911 18:32:22.607388  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind-2: (3.707209ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.705668  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind-2: (1.951032ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.805412  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind-2: (1.828556ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:22.905477  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind-2: (1.812225ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.004924  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind-2: (1.329748ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.017446  110822 scheduler_binder.go:546] All PVCs for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind-2" are bound
I0911 18:32:23.017514  110822 factory.go:606] Attempting to bind pod-w-canbind-2 to node-2
I0911 18:32:23.020108  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind-2/binding: (2.268687ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.020694  110822 scheduler.go:667] pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-canbind-2 is bound successfully on node "node-2", 2 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<50>|StorageEphemeral<0>.".
I0911 18:32:23.022659  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (1.741ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.105241  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-canbind-2: (1.548597ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.107148  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind-2: (1.334804ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.108421  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind-3: (971.904µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.109606  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-2: (917.751µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.111397  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-3: (1.522051ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.113124  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-5: (1.361021ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.119253  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (5.553941ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.124750  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2" deleted
I0911 18:32:23.124819  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-3" with version 40631
I0911 18:32:23.124869  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2 (uid: 9da370ce-2064-4da9-9614-35099c068b7a)", boundByController: true
I0911 18:32:23.124882  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2
I0911 18:32:23.126451  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind-2: (1.063477ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:23.126721  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2 not found
I0911 18:32:23.126749  110822 pv_controller.go:575] volume "pv-w-canbind-3" is released and reclaim policy "Retain" will be executed
I0911 18:32:23.126760  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-3]: set phase Released
I0911 18:32:23.127894  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (8.095164ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.128031  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" deleted
I0911 18:32:23.129006  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (1.975945ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:23.129266  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-3" with version 41268
I0911 18:32:23.129293  110822 pv_controller.go:798] volume "pv-w-canbind-3" entered phase "Released"
I0911 18:32:23.129304  110822 pv_controller.go:1011] reclaimVolume[pv-w-canbind-3]: policy is Retain, nothing to do
I0911 18:32:23.129328  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-2" with version 40638
I0911 18:32:23.129352  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3 (uid: 8700aa0e-26fb-434e-ac49-c4f496afb0e8)", boundByController: true
I0911 18:32:23.129369  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3
I0911 18:32:23.130655  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-canbind-3: (1.153441ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:23.130882  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3 not found
I0911 18:32:23.131339  110822 pv_controller.go:575] volume "pv-w-canbind-2" is released and reclaim policy "Retain" will be executed
I0911 18:32:23.132643  110822 pv_controller.go:777] updating PersistentVolume[pv-w-canbind-2]: set phase Released
I0911 18:32:23.134783  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (1.844317ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:23.134986  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-2" with version 41270
I0911 18:32:23.135034  110822 pv_controller.go:798] volume "pv-w-canbind-2" entered phase "Released"
I0911 18:32:23.135044  110822 pv_controller.go:1011] reclaimVolume[pv-w-canbind-2]: policy is Retain, nothing to do
I0911 18:32:23.135069  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-3" with version 41268
I0911 18:32:23.135093  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Released, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2 (uid: 9da370ce-2064-4da9-9614-35099c068b7a)", boundByController: true
I0911 18:32:23.135119  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2
I0911 18:32:23.135138  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2 not found
I0911 18:32:23.135145  110822 pv_controller.go:1011] reclaimVolume[pv-w-canbind-3]: policy is Retain, nothing to do
I0911 18:32:23.135157  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-canbind-2" with version 41270
I0911 18:32:23.135175  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Released, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3 (uid: 8700aa0e-26fb-434e-ac49-c4f496afb0e8)", boundByController: true
I0911 18:32:23.135184  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3
I0911 18:32:23.135200  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3 not found
I0911 18:32:23.135206  110822 pv_controller.go:1011] reclaimVolume[pv-w-canbind-2]: policy is Retain, nothing to do
I0911 18:32:23.136876  110822 pv_controller_base.go:212] volume "pv-w-canbind-2" deleted
I0911 18:32:23.136920  110822 pv_controller_base.go:396] deletion of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-3" was already processed
I0911 18:32:23.139796  110822 pv_controller_base.go:212] volume "pv-w-canbind-3" deleted
I0911 18:32:23.139846  110822 pv_controller_base.go:396] deletion of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-canbind-2" was already processed
I0911 18:32:23.141900  110822 httplog.go:90] DELETE /api/v1/persistentvolumes: (13.560191ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.142157  110822 pv_controller_base.go:212] volume "pv-w-canbind-5" deleted
I0911 18:32:23.148443  110822 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (5.907925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.148777  110822 volume_binding_test.go:195] Running test wait cannot bind two
I0911 18:32:23.150290  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.341773ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.151979  110822 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.351759ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.154041  110822 httplog.go:90] POST /api/v1/persistentvolumes: (1.80682ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.154135  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-cannotbind-1", version 41284
I0911 18:32:23.154179  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Pending, bound to: "", boundByController: false
I0911 18:32:23.154205  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I0911 18:32:23.154213  110822 pv_controller.go:777] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I0911 18:32:23.158057  110822 httplog.go:90] POST /api/v1/persistentvolumes: (3.349708ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:23.159476  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-1/status: (5.084987ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.159698  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 41286
I0911 18:32:23.159727  110822 pv_controller.go:798] volume "pv-w-cannotbind-1" entered phase "Available"
I0911 18:32:23.159749  110822 pv_controller_base.go:502] storeObjectUpdate: adding volume "pv-w-cannotbind-2", version 41285
I0911 18:32:23.159762  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Pending, bound to: "", boundByController: false
I0911 18:32:23.159777  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I0911 18:32:23.159781  110822 pv_controller.go:777] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I0911 18:32:23.162356  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-2/status: (2.428783ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.162637  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 41290
I0911 18:32:23.162667  110822 pv_controller.go:798] volume "pv-w-cannotbind-2" entered phase "Available"
I0911 18:32:23.162694  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 41286
I0911 18:32:23.162710  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Available, bound to: "", boundByController: false
I0911 18:32:23.163046  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I0911 18:32:23.163061  110822 pv_controller.go:777] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I0911 18:32:23.163070  110822 pv_controller.go:780] updating PersistentVolume[pv-w-cannotbind-1]: phase Available already set
I0911 18:32:23.163093  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 41290
I0911 18:32:23.163110  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Available, bound to: "", boundByController: false
I0911 18:32:23.163132  110822 pv_controller.go:494] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I0911 18:32:23.163139  110822 pv_controller.go:777] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I0911 18:32:23.163147  110822 pv_controller.go:780] updating PersistentVolume[pv-w-cannotbind-2]: phase Available already set
I0911 18:32:23.165427  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1", version 41288
I0911 18:32:23.165445  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:32:23.165471  110822 pv_controller.go:328] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1]: volume "pv-w-cannotbind-2" found: phase: Available, bound to: "", boundByController: false
I0911 18:32:23.165479  110822 pv_controller.go:931] binding volume "pv-w-cannotbind-2" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1"
I0911 18:32:23.165489  110822 pv_controller.go:829] updating PersistentVolume[pv-w-cannotbind-2]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1"
I0911 18:32:23.165527  110822 pv_controller.go:849] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1" bound to volume "pv-w-cannotbind-2"
I0911 18:32:23.165903  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (7.279657ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:23.168232  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-2: (2.530097ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.168390  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 41292
I0911 18:32:23.168403  110822 pv_controller.go:862] updating PersistentVolume[pv-w-cannotbind-2]: bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1"
I0911 18:32:23.168411  110822 pv_controller.go:777] updating PersistentVolume[pv-w-cannotbind-2]: set phase Bound
I0911 18:32:23.168544  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 41292
I0911 18:32:23.168576  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Available, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1 (uid: 82c59fb1-d4b3-4435-b489-117fadb7faeb)", boundByController: true
I0911 18:32:23.168590  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1
I0911 18:32:23.168610  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-cannotbind-2]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:32:23.168691  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:32:23.170183  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-2/status: (1.621476ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.170414  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 41295
I0911 18:32:23.170446  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1 (uid: 82c59fb1-d4b3-4435-b489-117fadb7faeb)", boundByController: true
I0911 18:32:23.170464  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1
I0911 18:32:23.170482  110822 pv_controller.go:555] synchronizing PersistentVolume[pv-w-cannotbind-2]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:32:23.170512  110822 pv_controller.go:603] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume not bound yet, waiting for syncClaim to fix it
I0911 18:32:23.170591  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 41295
I0911 18:32:23.170623  110822 pv_controller.go:798] volume "pv-w-cannotbind-2" entered phase "Bound"
I0911 18:32:23.170636  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1]: binding to "pv-w-cannotbind-2"
I0911 18:32:23.170652  110822 pv_controller.go:901] volume "pv-w-cannotbind-2" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1"
I0911 18:32:23.171736  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (4.494194ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:23.174041  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (1.870368ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:23.174224  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2
I0911 18:32:23.174329  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2
I0911 18:32:23.174622  110822 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2", PVC "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-2" on node "node-1"
I0911 18:32:23.174650  110822 scheduler_binder.go:718] storage class "wait-cljr" of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-2" does not support dynamic provisioning
I0911 18:32:23.174709  110822 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2", PVC "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1" on node "node-2"
I0911 18:32:23.174734  110822 scheduler_binder.go:718] storage class "wait-cljr" of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1" does not support dynamic provisioning
I0911 18:32:23.174780  110822 factory.go:541] Unable to schedule volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I0911 18:32:23.174817  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2 to (PodScheduled==False, Reason=Unschedulable)
I0911 18:32:23.177346  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-cannotbind-1: (6.439721ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.177588  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1" with version 41298
I0911 18:32:23.177698  110822 pv_controller.go:912] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1]: bound to "pv-w-cannotbind-2"
I0911 18:32:23.177790  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1] status: set phase Bound
I0911 18:32:23.180803  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-cannotbind-1/status: (2.715479ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.181075  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1" with version 41303
I0911 18:32:23.181100  110822 pv_controller.go:742] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1" entered phase "Bound"
I0911 18:32:23.181119  110822 pv_controller.go:957] volume "pv-w-cannotbind-2" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1"
I0911 18:32:23.181144  110822 pv_controller.go:958] volume "pv-w-cannotbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1 (uid: 82c59fb1-d4b3-4435-b489-117fadb7faeb)", boundByController: true
I0911 18:32:23.181159  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1" status after binding: phase: Bound, bound to: "pv-w-cannotbind-2", bindCompleted: true, boundByController: true
I0911 18:32:23.181197  110822 pv_controller_base.go:502] storeObjectUpdate: adding claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-2", version 41294
I0911 18:32:23.181211  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:32:23.181258  110822 pv_controller.go:303] synchronizing unbound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-2]: no volume found
I0911 18:32:23.181277  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-2] status: set phase Pending
I0911 18:32:23.181292  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-2] status: phase Pending already set
I0911 18:32:23.181309  110822 pv_controller_base.go:526] storeObjectUpdate: ignoring claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1" version 41298
I0911 18:32:23.181462  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind-2: (4.628061ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53178]
I0911 18:32:23.181484  110822 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2", Name:"pvc-w-cannotbind-2", UID:"baad570d-614f-4c4b-b2c0-d655c3747731", APIVersion:"v1", ResourceVersion:"41294", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
E0911 18:32:23.181924  110822 factory.go:581] pod is already present in the activeQ
I0911 18:32:23.182221  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (5.694302ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0911 18:32:23.183414  110822 pv_controller_base.go:530] storeObjectUpdate updating claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1" with version 41303
I0911 18:32:23.183563  110822 pv_controller.go:237] synchronizing PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1]: phase: Bound, bound to: "pv-w-cannotbind-2", bindCompleted: true, boundByController: true
I0911 18:32:23.183661  110822 pv_controller.go:449] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1]: volume "pv-w-cannotbind-2" found: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1 (uid: 82c59fb1-d4b3-4435-b489-117fadb7faeb)", boundByController: true
I0911 18:32:23.183731  110822 pv_controller.go:466] synchronizing bound PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1]: claim is already correctly bound
I0911 18:32:23.184755  110822 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (2.815807ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0911 18:32:23.183700  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind-2/status: (8.249228ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49016]
I0911 18:32:23.188783  110822 pv_controller.go:931] binding volume "pv-w-cannotbind-2" to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1"
I0911 18:32:23.188849  110822 pv_controller.go:829] updating PersistentVolume[pv-w-cannotbind-2]: binding to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1"
I0911 18:32:23.188911  110822 pv_controller.go:841] updating PersistentVolume[pv-w-cannotbind-2]: already bound to "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1"
I0911 18:32:23.188960  110822 pv_controller.go:777] updating PersistentVolume[pv-w-cannotbind-2]: set phase Bound
I0911 18:32:23.189008  110822 pv_controller.go:780] updating PersistentVolume[pv-w-cannotbind-2]: phase Bound already set
I0911 18:32:23.189222  110822 pv_controller.go:869] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1]: binding to "pv-w-cannotbind-2"
I0911 18:32:23.189308  110822 pv_controller.go:916] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1]: already bound to "pv-w-cannotbind-2"
I0911 18:32:23.189362  110822 pv_controller.go:683] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1] status: set phase Bound
I0911 18:32:23.189458  110822 pv_controller.go:728] updating PersistentVolumeClaim[volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1] status: phase Bound already set
I0911 18:32:23.189525  110822 pv_controller.go:957] volume "pv-w-cannotbind-2" bound to claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1"
I0911 18:32:23.189612  110822 pv_controller.go:958] volume "pv-w-cannotbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1 (uid: 82c59fb1-d4b3-4435-b489-117fadb7faeb)", boundByController: true
I0911 18:32:23.189710  110822 pv_controller.go:959] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1" status after binding: phase: Bound, bound to: "pv-w-cannotbind-2", bindCompleted: true, boundByController: true
I0911 18:32:23.191579  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind-2: (2.459266ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0911 18:32:23.191929  110822 generic_scheduler.go:337] Preemption will not help schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2 on any node.
I0911 18:32:23.192054  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2
I0911 18:32:23.192073  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2
I0911 18:32:23.192318  110822 scheduler_binder.go:646] PersistentVolume "pv-w-cannotbind-2", Node "node-2" mismatch for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2": No matching NodeSelectorTerms
I0911 18:32:23.192329  110822 scheduler_binder.go:652] All bound volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2" match with Node "node-1"
I0911 18:32:23.192374  110822 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2", PVC "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-2" on node "node-1"
I0911 18:32:23.192381  110822 scheduler_binder.go:692] Found matching volumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2" on node "node-2"
I0911 18:32:23.192386  110822 scheduler_binder.go:718] storage class "wait-cljr" of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-2" does not support dynamic provisioning
I0911 18:32:23.192450  110822 factory.go:541] Unable to schedule volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2: no fit: 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had volume node affinity conflict.; waiting
I0911 18:32:23.192514  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2 to (PodScheduled==False, Reason=Unschedulable)
I0911 18:32:23.195752  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (1.92892ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.195786  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind-2: (2.92337ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0911 18:32:23.196192  110822 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind-2/status: (2.707908ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53178]
E0911 18:32:23.197145  110822 factory.go:581] pod is already present in the activeQ
I0911 18:32:23.197873  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind-2: (1.16966ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53178]
I0911 18:32:23.198114  110822 generic_scheduler.go:337] Preemption will not help schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2 on any node.
I0911 18:32:23.198219  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2
I0911 18:32:23.198239  110822 scheduler.go:530] Attempting to schedule pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2
I0911 18:32:23.198388  110822 scheduler_binder.go:652] All bound volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2" match with Node "node-1"
I0911 18:32:23.198427  110822 scheduler_binder.go:679] No matching volumes for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2", PVC "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-2" on node "node-1"
I0911 18:32:23.198441  110822 scheduler_binder.go:718] storage class "wait-cljr" of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-2" does not support dynamic provisioning
I0911 18:32:23.198561  110822 scheduler_binder.go:646] PersistentVolume "pv-w-cannotbind-2", Node "node-2" mismatch for Pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2": No matching NodeSelectorTerms
I0911 18:32:23.198596  110822 scheduler_binder.go:692] Found matching volumes for pod "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2" on node "node-2"
I0911 18:32:23.198631  110822 factory.go:541] Unable to schedule volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2: no fit: 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had volume node affinity conflict.; waiting
I0911 18:32:23.198655  110822 factory.go:615] Updating pod condition for volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2 to (PodScheduled==False, Reason=Unschedulable)
I0911 18:32:23.202722  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind-2: (3.623484ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0911 18:32:23.202730  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind-2: (3.664879ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.202923  110822 generic_scheduler.go:337] Preemption will not help schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2 on any node.
I0911 18:32:23.207389  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (8.082559ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
I0911 18:32:23.276738  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods/pod-w-cannotbind-2: (1.953903ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.278928  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-cannotbind-1: (1.795548ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.280941  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-cannotbind-2: (1.66747ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.282205  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-cannotbind-1: (1.009805ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.283377  110822 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-cannotbind-2: (882.415µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.287907  110822 scheduling_queue.go:830] About to try and schedule pod volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2
I0911 18:32:23.288004  110822 scheduler.go:526] Skip schedule deleting pod: volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pod-w-cannotbind-2
I0911 18:32:23.291025  110822 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/events: (1.654614ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0911 18:32:23.292129  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (8.357927ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.296452  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1" deleted
I0911 18:32:23.296482  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 41295
I0911 18:32:23.296535  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Bound, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1 (uid: 82c59fb1-d4b3-4435-b489-117fadb7faeb)", boundByController: true
I0911 18:32:23.296544  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1
I0911 18:32:23.298072  110822 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims/pvc-w-cannotbind-1: (1.383511ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0911 18:32:23.298452  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-cannotbind-2]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1 not found
I0911 18:32:23.298540  110822 pv_controller.go:575] volume "pv-w-cannotbind-2" is released and reclaim policy "Retain" will be executed
I0911 18:32:23.298578  110822 pv_controller.go:777] updating PersistentVolume[pv-w-cannotbind-2]: set phase Released
I0911 18:32:23.303136  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (10.571748ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.303981  110822 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-2/status: (5.167653ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53180]
I0911 18:32:23.304221  110822 pv_controller_base.go:258] claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-2" deleted
I0911 18:32:23.304554  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 41353
I0911 18:32:23.304580  110822 pv_controller.go:798] volume "pv-w-cannotbind-2" entered phase "Released"
I0911 18:32:23.304592  110822 pv_controller.go:1011] reclaimVolume[pv-w-cannotbind-2]: policy is Retain, nothing to do
I0911 18:32:23.304616  110822 pv_controller_base.go:530] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 41353
I0911 18:32:23.304652  110822 pv_controller.go:489] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Released, bound to: "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1 (uid: 82c59fb1-d4b3-4435-b489-117fadb7faeb)", boundByController: true
I0911 18:32:23.304673  110822 pv_controller.go:514] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is bound to claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1
I0911 18:32:23.304694  110822 pv_controller.go:547] synchronizing PersistentVolume[pv-w-cannotbind-2]: claim volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1 not found
I0911 18:32:23.304702  110822 pv_controller.go:1011] reclaimVolume[pv-w-cannotbind-2]: policy is Retain, nothing to do
I0911 18:32:23.310803  110822 pv_controller_base.go:212] volume "pv-w-cannotbind-1" deleted
I0911 18:32:23.315335  110822 httplog.go:90] DELETE /api/v1/persistentvolumes: (10.783843ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.317674  110822 pv_controller_base.go:212] volume "pv-w-cannotbind-2" deleted
I0911 18:32:23.317710  110822 pv_controller_base.go:396] deletion of claim "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1" was already processed
I0911 18:32:23.330136  110822 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (14.226191ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.330457  110822 volume_binding_test.go:932] test cluster "volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2" start to tear down
I0911 18:32:23.332269  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pods: (1.509241ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.333723  110822 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/persistentvolumeclaims: (1.052332ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.335417  110822 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.072464ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.336842  110822 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (938.273µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.337275  110822 pv_controller_base.go:298] Shutting down persistent volume controller
I0911 18:32:23.337303  110822 pv_controller_base.go:409] claim worker queue shutting down
I0911 18:32:23.337382  110822 pv_controller_base.go:352] volume worker queue shutting down
I0911 18:32:23.337911  110822 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=31989&timeout=5m40s&timeoutSeconds=340&watch=true: (1m2.779297162s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0911 18:32:23.338132  110822 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=31990&timeout=7m22s&timeoutSeconds=442&watch=true: (1m1.582355386s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42206]
I0911 18:32:23.338290  110822 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=31989&timeout=6m44s&timeoutSeconds=404&watch=true: (1m1.582308527s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42210]
I0911 18:32:23.338417  110822 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=31990&timeout=8m33s&timeoutSeconds=513&watch=true: (1m2.76578875s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42048]
I0911 18:32:23.338713  110822 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=31989&timeout=9m45s&timeoutSeconds=585&watch=true: (1m2.77635602s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42028]
I0911 18:32:23.338915  110822 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=31991&timeout=8m12s&timeoutSeconds=492&watch=true: (1m2.777107307s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41662]
I0911 18:32:23.339059  110822 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=31989&timeout=7m2s&timeoutSeconds=422&watch=true: (1m1.583274286s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42200]
I0911 18:32:23.339267  110822 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=31989&timeout=7m28s&timeoutSeconds=448&watch=true: (1m2.76967284s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42036]
I0911 18:32:23.339444  110822 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=31994&timeout=5m54s&timeoutSeconds=354&watch=true: (1m2.769565223s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42042]
I0911 18:32:23.339588  110822 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=31989&timeout=5m15s&timeoutSeconds=315&watch=true: (1m1.58403275s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0911 18:32:23.339812  110822 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=31995&timeout=7m2s&timeoutSeconds=422&watch=true: (1m2.755030962s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0911 18:32:23.339957  110822 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=31995&timeout=9m42s&timeoutSeconds=582&watch=true: (1m2.77364679s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0911 18:32:23.340074  110822 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=31990&timeout=9m16s&timeoutSeconds=556&watch=true: (1m2.767759241s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0911 18:32:23.340352  110822 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=31995&timeout=6m22s&timeoutSeconds=382&watch=true: (1m2.767561011s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42050]
I0911 18:32:23.340416  110822 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=31995&timeout=8m33s&timeoutSeconds=513&watch=true: (1m1.584792237s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42202]
I0911 18:32:23.340634  110822 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=31990&timeout=9m2s&timeoutSeconds=542&watch=true: (1m2.781897632s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0911 18:32:23.349912  110822 httplog.go:90] DELETE /api/v1/nodes: (9.262459ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.350171  110822 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0911 18:32:23.351899  110822 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.54085ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0911 18:32:23.354602  110822 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.039005ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
W0911 18:32:23.355298  110822 feature_gate.go:208] Setting GA feature gate PersistentLocalVolumes=true. It will be removed in a future release.
I0911 18:32:23.355314  110822 feature_gate.go:216] feature gates: &{map[PersistentLocalVolumes:true]}
--- FAIL: TestVolumeBinding (66.37s)
    volume_binding_test.go:1143: PVC volume-scheduling-567ddab1-1f1a-4f91-ae4d-5c10ed9e62f2/pvc-w-cannotbind-1 phase not Pending, got Bound
    volume_binding_test.go:1191: PV pv-w-cannotbind-2 phase not Available, got Bound

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190911-182318.xml
