PR xing-yang: Enable VolumeSnapshotDataSource Feature Gate and update e2e tests for VolumeSnapshot CRD v1beta1
Result: FAILURE
Tests: 1 failed / 2898 succeeded
Started: 2019-11-09 03:54
Elapsed: 24m59s
Revision: 356a73c6ad15070cca66e307c49082ea544e480c
Refs: 80058

Test Failures


k8s.io/kubernetes/test/integration/volumescheduling TestVolumeBinding 1m6s

go test -v k8s.io/kubernetes/test/integration/volumescheduling -run TestVolumeBinding$
=== RUN   TestVolumeBinding
W1109 04:15:59.998132  112476 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1109 04:15:59.998269  112476 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I1109 04:15:59.998293  112476 master.go:311] Node port range unspecified. Defaulting to 30000-32767.
I1109 04:15:59.998308  112476 master.go:267] Using reconciler: 
I1109 04:16:00.000792  112476 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.001265  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.001314  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.002562  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.002597  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.006096  112476 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1109 04:16:00.006194  112476 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.006603  112476 reflector.go:188] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1109 04:16:00.007396  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.007544  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.009213  112476 store.go:1342] Monitoring events count at <storage-prefix>//events
I1109 04:16:00.009290  112476 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.009312  112476 watch_cache.go:409] Replace watchCache (rev: 30879) 
I1109 04:16:00.013453  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.013528  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.013691  112476 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I1109 04:16:00.015493  112476 watch_cache.go:409] Replace watchCache (rev: 30880) 
I1109 04:16:00.020326  112476 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1109 04:16:00.020587  112476 reflector.go:188] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1109 04:16:00.020551  112476 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.021151  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.021298  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.023860  112476 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1109 04:16:00.024285  112476 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.024941  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.025111  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.025153  112476 watch_cache.go:409] Replace watchCache (rev: 30884) 
I1109 04:16:00.024072  112476 reflector.go:188] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1109 04:16:00.026790  112476 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1109 04:16:00.027014  112476 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.027321  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.027354  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.027619  112476 watch_cache.go:409] Replace watchCache (rev: 30884) 
I1109 04:16:00.027699  112476 reflector.go:188] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1109 04:16:00.028784  112476 watch_cache.go:409] Replace watchCache (rev: 30884) 
I1109 04:16:00.029102  112476 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1109 04:16:00.029218  112476 reflector.go:188] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1109 04:16:00.029499  112476 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.029779  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.029871  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.029959  112476 watch_cache.go:409] Replace watchCache (rev: 30884) 
I1109 04:16:00.030806  112476 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1109 04:16:00.031046  112476 reflector.go:188] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1109 04:16:00.031171  112476 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.031323  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.031365  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.032242  112476 watch_cache.go:409] Replace watchCache (rev: 30884) 
I1109 04:16:00.032878  112476 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1109 04:16:00.033002  112476 reflector.go:188] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1109 04:16:00.033611  112476 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.033899  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.033929  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.034123  112476 watch_cache.go:409] Replace watchCache (rev: 30884) 
I1109 04:16:00.035068  112476 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1109 04:16:00.035269  112476 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.035402  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.035446  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.035545  112476 reflector.go:188] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1109 04:16:00.036840  112476 watch_cache.go:409] Replace watchCache (rev: 30884) 
I1109 04:16:00.037328  112476 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1109 04:16:00.037549  112476 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.037813  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.037928  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.037590  112476 reflector.go:188] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1109 04:16:00.039162  112476 watch_cache.go:409] Replace watchCache (rev: 30884) 
I1109 04:16:00.040122  112476 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1109 04:16:00.040184  112476 reflector.go:188] Listing and watching *core.Node from storage/cacher.go:/minions
I1109 04:16:00.040390  112476 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.040614  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.040648  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.045216  112476 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1109 04:16:00.045499  112476 watch_cache.go:409] Replace watchCache (rev: 30884) 
I1109 04:16:00.045587  112476 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.046336  112476 reflector.go:188] Listing and watching *core.Pod from storage/cacher.go:/pods
I1109 04:16:00.047327  112476 watch_cache.go:409] Replace watchCache (rev: 30884) 
I1109 04:16:00.048869  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.048927  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.049955  112476 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1109 04:16:00.050403  112476 reflector.go:188] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1109 04:16:00.051167  112476 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.051577  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.051721  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.053142  112476 watch_cache.go:409] Replace watchCache (rev: 30884) 
I1109 04:16:00.053851  112476 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1109 04:16:00.053927  112476 reflector.go:188] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1109 04:16:00.054074  112476 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.054764  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.054808  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.056358  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.056393  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.057157  112476 watch_cache.go:409] Replace watchCache (rev: 30884) 
I1109 04:16:00.057972  112476 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.058134  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.058162  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.061576  112476 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1109 04:16:00.061708  112476 reflector.go:188] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1109 04:16:00.061722  112476 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1109 04:16:00.062493  112476 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.062856  112476 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.063567  112476 watch_cache.go:409] Replace watchCache (rev: 30885) 
I1109 04:16:00.063661  112476 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.064458  112476 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.065656  112476 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.066460  112476 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.067010  112476 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.067173  112476 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.067546  112476 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.068212  112476 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.069180  112476 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.069569  112476 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.070539  112476 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.070954  112476 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.071727  112476 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.072146  112476 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.073119  112476 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.073486  112476 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.073754  112476 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.073995  112476 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.074309  112476 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.074689  112476 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.075091  112476 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.076458  112476 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.077016  112476 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.078451  112476 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.079763  112476 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.080220  112476 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.080717  112476 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.081838  112476 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.082549  112476 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.083517  112476 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.084429  112476 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.085332  112476 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.086459  112476 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.086946  112476 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.087214  112476 master.go:496] Skipping disabled API group "auditregistration.k8s.io".
I1109 04:16:00.087311  112476 master.go:507] Enabling API group "authentication.k8s.io".
I1109 04:16:00.087391  112476 master.go:507] Enabling API group "authorization.k8s.io".
I1109 04:16:00.087703  112476 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.088013  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.088118  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.089495  112476 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1109 04:16:00.089591  112476 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1109 04:16:00.090010  112476 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.090371  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.090503  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.091227  112476 watch_cache.go:409] Replace watchCache (rev: 30885) 
I1109 04:16:00.091817  112476 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1109 04:16:00.091891  112476 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1109 04:16:00.092150  112476 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.092467  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.092499  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.093195  112476 watch_cache.go:409] Replace watchCache (rev: 30885) 
I1109 04:16:00.093538  112476 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1109 04:16:00.093575  112476 master.go:507] Enabling API group "autoscaling".
I1109 04:16:00.093718  112476 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1109 04:16:00.093862  112476 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.094079  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.094116  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.094681  112476 watch_cache.go:409] Replace watchCache (rev: 30885) 
I1109 04:16:00.095248  112476 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1109 04:16:00.095321  112476 reflector.go:188] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1109 04:16:00.095496  112476 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.095672  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.095701  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.096141  112476 watch_cache.go:409] Replace watchCache (rev: 30885) 
I1109 04:16:00.096503  112476 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1109 04:16:00.096542  112476 master.go:507] Enabling API group "batch".
I1109 04:16:00.096600  112476 reflector.go:188] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1109 04:16:00.096756  112476 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.096914  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.096934  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.097679  112476 watch_cache.go:409] Replace watchCache (rev: 30885) 
I1109 04:16:00.098144  112476 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1109 04:16:00.098260  112476 reflector.go:188] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1109 04:16:00.098302  112476 master.go:507] Enabling API group "certificates.k8s.io".
I1109 04:16:00.098958  112476 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.099172  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.099278  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.099570  112476 watch_cache.go:409] Replace watchCache (rev: 30885) 
I1109 04:16:00.101656  112476 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1109 04:16:00.101798  112476 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1109 04:16:00.101949  112476 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.102150  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.102172  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.103564  112476 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1109 04:16:00.103618  112476 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1109 04:16:00.103634  112476 master.go:507] Enabling API group "coordination.k8s.io".
I1109 04:16:00.103657  112476 master.go:496] Skipping disabled API group "discovery.k8s.io".
I1109 04:16:00.103956  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.104127  112476 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.104925  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.104971  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.105294  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.108709  112476 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1109 04:16:00.108752  112476 master.go:507] Enabling API group "extensions".
I1109 04:16:00.108775  112476 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1109 04:16:00.109128  112476 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.109437  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.109489  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.110517  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.110705  112476 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1109 04:16:00.110928  112476 reflector.go:188] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1109 04:16:00.111016  112476 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.111233  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.111267  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.111944  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.112313  112476 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1109 04:16:00.112340  112476 master.go:507] Enabling API group "networking.k8s.io".
I1109 04:16:00.112475  112476 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1109 04:16:00.112490  112476 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.112741  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.112784  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.114547  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.114807  112476 reflector.go:188] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1109 04:16:00.114727  112476 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1109 04:16:00.114905  112476 master.go:507] Enabling API group "node.k8s.io".
I1109 04:16:00.115452  112476 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.115705  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.115743  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.116143  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.117498  112476 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1109 04:16:00.117615  112476 reflector.go:188] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1109 04:16:00.118288  112476 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.118658  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.119045  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.119274  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.121072  112476 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1109 04:16:00.121129  112476 master.go:507] Enabling API group "policy".
I1109 04:16:00.121255  112476 reflector.go:188] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1109 04:16:00.121238  112476 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.121482  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.121545  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.122145  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.122883  112476 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1109 04:16:00.122956  112476 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1109 04:16:00.123141  112476 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.123447  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.123480  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.123949  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.124236  112476 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1109 04:16:00.124299  112476 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.124424  112476 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1109 04:16:00.124453  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.124474  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.125713  112476 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1109 04:16:00.125798  112476 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1109 04:16:00.125938  112476 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.126316  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.126522  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.126943  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.127313  112476 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1109 04:16:00.127472  112476 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.127592  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.127630  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.127644  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.127733  112476 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1109 04:16:00.128659  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.128975  112476 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1109 04:16:00.129018  112476 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1109 04:16:00.129875  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.129970  112476 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.130593  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.130717  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.131516  112476 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1109 04:16:00.131564  112476 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1109 04:16:00.131585  112476 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.131776  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.131806  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.132768  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.133051  112476 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1109 04:16:00.133247  112476 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.133380  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.133400  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.133516  112476 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1109 04:16:00.134462  112476 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1109 04:16:00.134506  112476 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1109 04:16:00.134678  112476 master.go:507] Enabling API group "rbac.authorization.k8s.io".
I1109 04:16:00.134955  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.135247  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.137077  112476 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.137306  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.137350  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.138222  112476 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1109 04:16:00.138548  112476 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1109 04:16:00.138668  112476 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.138900  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.138982  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.139899  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.140183  112476 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1109 04:16:00.140279  112476 master.go:507] Enabling API group "scheduling.k8s.io".
I1109 04:16:00.140350  112476 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1109 04:16:00.141623  112476 watch_cache.go:409] Replace watchCache (rev: 30886) 
I1109 04:16:00.141823  112476 master.go:496] Skipping disabled API group "settings.k8s.io".
I1109 04:16:00.142352  112476 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.142578  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.142610  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.143711  112476 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1109 04:16:00.143850  112476 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1109 04:16:00.143957  112476 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.144157  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.144190  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.145715  112476 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1109 04:16:00.145773  112476 watch_cache.go:409] Replace watchCache (rev: 30887) 
I1109 04:16:00.145800  112476 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.145944  112476 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1109 04:16:00.145985  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.146006  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.146956  112476 watch_cache.go:409] Replace watchCache (rev: 30887) 
I1109 04:16:00.147183  112476 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1109 04:16:00.147261  112476 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.147459  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.147489  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.147555  112476 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1109 04:16:00.148722  112476 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1109 04:16:00.148761  112476 watch_cache.go:409] Replace watchCache (rev: 30887) 
I1109 04:16:00.148799  112476 reflector.go:188] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1109 04:16:00.148980  112476 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.149133  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.149164  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.150029  112476 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1109 04:16:00.150113  112476 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1109 04:16:00.150229  112476 watch_cache.go:409] Replace watchCache (rev: 30887) 
I1109 04:16:00.150235  112476 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.151115  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.151140  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.152193  112476 watch_cache.go:409] Replace watchCache (rev: 30887) 
I1109 04:16:00.152230  112476 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1109 04:16:00.152349  112476 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.152482  112476 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1109 04:16:00.152493  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.152512  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.153437  112476 watch_cache.go:409] Replace watchCache (rev: 30887) 
I1109 04:16:00.153681  112476 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1109 04:16:00.153710  112476 master.go:507] Enabling API group "storage.k8s.io".
I1109 04:16:00.153747  112476 master.go:496] Skipping disabled API group "flowcontrol.apiserver.k8s.io".
I1109 04:16:00.153780  112476 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1109 04:16:00.154120  112476 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.154462  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.154576  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.154464  112476 watch_cache.go:409] Replace watchCache (rev: 30887) 
I1109 04:16:00.155724  112476 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1109 04:16:00.156004  112476 reflector.go:188] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1109 04:16:00.156352  112476 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.156945  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.156965  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.158347  112476 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1109 04:16:00.158545  112476 reflector.go:188] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1109 04:16:00.158616  112476 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.158889  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.158933  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.161124  112476 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1109 04:16:00.161457  112476 reflector.go:188] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1109 04:16:00.162899  112476 watch_cache.go:409] Replace watchCache (rev: 30888) 
I1109 04:16:00.163210  112476 watch_cache.go:409] Replace watchCache (rev: 30888) 
I1109 04:16:00.164073  112476 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.164195  112476 watch_cache.go:409] Replace watchCache (rev: 30887) 
I1109 04:16:00.164457  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.164509  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.165596  112476 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1109 04:16:00.165702  112476 reflector.go:188] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1109 04:16:00.165944  112476 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.166207  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.166258  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.167820  112476 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1109 04:16:00.167879  112476 master.go:507] Enabling API group "apps".
I1109 04:16:00.168065  112476 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.168428  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.168583  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.168792  112476 reflector.go:188] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1109 04:16:00.170288  112476 watch_cache.go:409] Replace watchCache (rev: 30889) 
I1109 04:16:00.171488  112476 watch_cache.go:409] Replace watchCache (rev: 30889) 
I1109 04:16:00.171755  112476 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1109 04:16:00.171861  112476 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1109 04:16:00.171837  112476 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.172599  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.172631  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.174694  112476 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1109 04:16:00.174850  112476 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.175065  112476 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1109 04:16:00.175080  112476 watch_cache.go:409] Replace watchCache (rev: 30890) 
I1109 04:16:00.175546  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.175595  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.176575  112476 watch_cache.go:409] Replace watchCache (rev: 30890) 
I1109 04:16:00.176602  112476 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1109 04:16:00.176718  112476 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1109 04:16:00.176723  112476 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.176908  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.176934  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.178325  112476 watch_cache.go:409] Replace watchCache (rev: 30890) 
I1109 04:16:00.178383  112476 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1109 04:16:00.178489  112476 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1109 04:16:00.178521  112476 master.go:507] Enabling API group "admissionregistration.k8s.io".
I1109 04:16:00.178776  112476 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.179519  112476 watch_cache.go:409] Replace watchCache (rev: 30890) 
I1109 04:16:00.179622  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.179670  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:00.194557  112476 store.go:1342] Monitoring events count at <storage-prefix>//events
I1109 04:16:00.194603  112476 master.go:507] Enabling API group "events.k8s.io".
I1109 04:16:00.194683  112476 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I1109 04:16:00.195021  112476 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.195445  112476 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.195807  112476 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.195946  112476 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.196138  112476 watch_cache.go:409] Replace watchCache (rev: 30891) 
I1109 04:16:00.196242  112476 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.196402  112476 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.196730  112476 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.196891  112476 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.197676  112476 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.198017  112476 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.199689  112476 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.200145  112476 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.201769  112476 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.205023  112476 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.211081  112476 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.212474  112476 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.216736  112476 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.217625  112476 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.218631  112476 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.219153  112476 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 04:16:00.219282  112476 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1109 04:16:00.220216  112476 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.220483  112476 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.220887  112476 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.221894  112476 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.222987  112476 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.224188  112476 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.224589  112476 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.225610  112476 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.226669  112476 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.227035  112476 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.228038  112476 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 04:16:00.228173  112476 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1109 04:16:00.229322  112476 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.229768  112476 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.230478  112476 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.233024  112476 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.233725  112476 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.234582  112476 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.235508  112476 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.236474  112476 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.237197  112476 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.238033  112476 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.238975  112476 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 04:16:00.239136  112476 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1109 04:16:00.240021  112476 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.240777  112476 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 04:16:00.240907  112476 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1109 04:16:00.241655  112476 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.242372  112476 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.243213  112476 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.243642  112476 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.244398  112476 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.245129  112476 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.245855  112476 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.246537  112476 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 04:16:00.246655  112476 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1109 04:16:00.247695  112476 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.248680  112476 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.249108  112476 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.250068  112476 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.250448  112476 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.250833  112476 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.251861  112476 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.252229  112476 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.252601  112476 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.253624  112476 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.254019  112476 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.254392  112476 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 04:16:00.254530  112476 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1109 04:16:00.254576  112476 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1109 04:16:00.255429  112476 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.256401  112476 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.257316  112476 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.258089  112476 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 04:16:00.259163  112476 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"87c6aefe-e175-476b-9a34-2f22dccf8ed1", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
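The `CompactionInterval:300000000000` and `CountMetricPollPeriod:60000000000` fields in the `storagebackend.Config` dumps above are Go `time.Duration` values printed as raw nanosecond counts. A minimal sketch (the `nanos` helper is mine, not part of the test code) that renders them readably:

```go
package main

import (
	"fmt"
	"time"
)

// nanos renders a raw nanosecond count, as printed in the config dumps
// above, as a human-readable Go duration string.
func nanos(n int64) string { return time.Duration(n).String() }

func main() {
	fmt.Println(nanos(300000000000)) // CompactionInterval -> 5m0s
	fmt.Println(nanos(60000000000))  // CountMetricPollPeriod -> 1m0s
}
```

So the test apiserver is configured with a 5-minute etcd compaction interval and a 1-minute object-count metric poll period.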
W1109 04:16:00.263841  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 04:16:00.264178  112476 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1109 04:16:00.264233  112476 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I1109 04:16:00.264644  112476 reflector.go:153] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I1109 04:16:00.264715  112476 reflector.go:188] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I1109 04:16:00.268024  112476 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0: (1.231697ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
I1109 04:16:00.268425  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.268453  112476 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1109 04:16:00.268465  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.268481  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.268501  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.268526  112476 httplog.go:90] GET /healthz: (259.758µs) 0 [Go-http-client/1.1 127.0.0.1:54964]
I1109 04:16:00.270932  112476 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=30884 labels= fields= timeout=5m47s
I1109 04:16:00.277575  112476 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (8.944529ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:00.283479  112476 httplog.go:90] GET /api/v1/services: (3.09147ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:00.295806  112476 httplog.go:90] GET /api/v1/services: (1.468393ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:00.299350  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.299547  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.299609  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.299660  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.299783  112476 httplog.go:90] GET /healthz: (782.936µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:00.306852  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (7.101355ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54964]
I1109 04:16:00.307555  112476 httplog.go:90] GET /api/v1/services: (1.698494ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54982]
I1109 04:16:00.307737  112476 httplog.go:90] GET /api/v1/services: (1.91049ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:00.313212  112476 httplog.go:90] POST /api/v1/namespaces: (4.395868ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54964]
I1109 04:16:00.315066  112476 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.320257ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:00.317836  112476 httplog.go:90] POST /api/v1/namespaces: (2.173806ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:00.319606  112476 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.334165ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:00.322142  112476 httplog.go:90] POST /api/v1/namespaces: (2.013967ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:00.364480  112476 shared_informer.go:227] caches populated
I1109 04:16:00.364516  112476 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
I1109 04:16:00.372164  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.372213  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.372231  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.372238  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.372272  112476 httplog.go:90] GET /healthz: (280.411µs) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:00.406691  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.406727  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.406736  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.406744  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.406783  112476 httplog.go:90] GET /healthz: (276.655µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:00.472159  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.472196  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.472205  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.472213  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.472245  112476 httplog.go:90] GET /healthz: (230.013µs) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:00.506704  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.506744  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.506756  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.506766  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.506812  112476 httplog.go:90] GET /healthz: (289.498µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:00.572178  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.572212  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.572224  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.572230  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.572266  112476 httplog.go:90] GET /healthz: (265.052µs) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:00.606818  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.606861  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.606874  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.606884  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.606924  112476 httplog.go:90] GET /healthz: (333.42µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:00.672212  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.672257  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.672271  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.672284  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.672320  112476 httplog.go:90] GET /healthz: (303.977µs) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:00.706754  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.706797  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.706809  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.706819  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.706855  112476 httplog.go:90] GET /healthz: (294.847µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:00.772236  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.772281  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.772294  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.772304  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.772348  112476 httplog.go:90] GET /healthz: (306.175µs) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:00.807038  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.807089  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.807103  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.807112  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.807153  112476 httplog.go:90] GET /healthz: (292.975µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:00.872176  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.872228  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.872241  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.872251  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.872287  112476 httplog.go:90] GET /healthz: (286.348µs) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:00.906740  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.906777  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.906790  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.906800  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.906851  112476 httplog.go:90] GET /healthz: (317.808µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:00.972147  112476 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 04:16:00.972202  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:00.972219  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:00.972228  112476 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:00.972264  112476 httplog.go:90] GET /healthz: (286.959µs) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:00.997988  112476 client.go:361] parsed scheme: "endpoint"
I1109 04:16:00.998088  112476 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 04:16:01.007774  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.007802  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:01.007812  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.007851  112476 httplog.go:90] GET /healthz: (1.327784ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
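The repeated `/healthz` responses in this log use kube-apiserver's verbose check format: one `[+]name ok` or `[-]name failed: reason withheld` line per registered check, followed by an overall verdict line. As a minimal sketch (a hypothetical helper, not part of the test under run), this format can be parsed into a check-to-status map:

```python
def parse_healthz(body: str) -> dict:
    """Parse kube-apiserver verbose /healthz output into {check_name: passed}."""
    checks = {}
    for line in body.splitlines():
        line = line.strip()
        if line.startswith("[+]"):
            # e.g. "[+]ping ok" -> check "ping" passed
            checks[line[3:].split()[0]] = True
        elif line.startswith("[-]"):
            # e.g. "[-]etcd failed: reason withheld" -> check "etcd" failed
            checks[line[3:].split()[0]] = False
        # verdict lines like "healthz check failed" carry no per-check data
    return checks

sample = """[+]ping ok
[-]etcd failed: reason withheld
[+]poststarthook/bootstrap-controller ok
healthz check failed"""
print(parse_healthz(sample))
```

In the log above, the apiserver keeps returning HTTP 0-length failures until the `etcd` and poststarthook checks flip from `[-]` to `[+]`, which is the handshake the test harness polls for before proceeding.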
I1109 04:16:01.074869  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.074907  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:01.074918  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.074964  112476 httplog.go:90] GET /healthz: (3.005953ms) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:01.107878  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.107913  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:01.107937  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.107984  112476 httplog.go:90] GET /healthz: (1.442814ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.173286  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.173321  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:01.173332  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.173383  112476 httplog.go:90] GET /healthz: (1.37813ms) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:01.207687  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.207718  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:01.207729  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.207782  112476 httplog.go:90] GET /healthz: (1.330812ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.267319  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.000255ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.271387  112476 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical: (6.834971ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.271525  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.149819ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.275115  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.275143  112476 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 04:16:01.275153  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.275201  112476 httplog.go:90] GET /healthz: (3.398054ms) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:01.276034  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.938961ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55140]
I1109 04:16:01.276153  112476 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (3.698041ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.276345  112476 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I1109 04:16:01.278110  112476 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical: (1.581465ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.278907  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.040233ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55140]
I1109 04:16:01.280793  112476 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (2.244464ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.280887  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.420864ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55140]
I1109 04:16:01.281017  112476 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I1109 04:16:01.281035  112476 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I1109 04:16:01.282471  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (998.408µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.284518  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.686796ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.285852  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (961.156µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.287202  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.005381ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.288583  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (960.16µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.292982  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.954742ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.294930  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1109 04:16:01.296460  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.243646ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.300114  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.035296ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.302206  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1109 04:16:01.308178  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.308373  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (5.878912ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.308375  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.308700  112476 httplog.go:90] GET /healthz: (2.307427ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.311303  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.190673ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.311725  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1109 04:16:01.313496  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.510528ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.317847  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.635649ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.318214  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1109 04:16:01.320008  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.248704ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.322155  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.525148ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.322655  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1109 04:16:01.325600  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.555269ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.328461  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.058824ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.328680  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1109 04:16:01.330129  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.154802ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.332772  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.209378ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.333222  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1109 04:16:01.334975  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.447216ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.337297  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.818466ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.337546  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1109 04:16:01.338808  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.017814ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.342656  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.213053ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.343083  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1109 04:16:01.345019  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.194368ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.349215  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.649911ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.349852  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1109 04:16:01.351508  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.321003ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.354755  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.623089ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.355156  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1109 04:16:01.358073  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (2.520986ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.361587  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.915533ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.361995  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1109 04:16:01.363139  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (949.849µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.365628  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.795251ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.365914  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1109 04:16:01.367933  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.762927ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.370381  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.801641ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.370625  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1109 04:16:01.372042  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.147587ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.374685  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.178062ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.374958  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.374986  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.375018  112476 httplog.go:90] GET /healthz: (3.204584ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:01.375041  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1109 04:16:01.376120  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (875.244µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.378730  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.991548ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.378987  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1109 04:16:01.380593  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.351273ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.384447  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.449366ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.384862  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1109 04:16:01.386870  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.619697ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.389513  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.152787ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.389788  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1109 04:16:01.391441  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.430114ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.393945  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.007377ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.394293  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1109 04:16:01.395643  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.112992ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.397867  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.7645ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.398300  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1109 04:16:01.400816  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (2.194147ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.403301  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.905151ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.403537  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1109 04:16:01.404667  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (866.129µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.406860  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.665631ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.407100  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1109 04:16:01.407574  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.407603  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.407642  112476 httplog.go:90] GET /healthz: (909.419µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.412317  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (4.831367ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.415442  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.189193ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.415735  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1109 04:16:01.417094  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.113312ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.419543  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.825484ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.419892  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1109 04:16:01.421687  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.440298ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.424606  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.387804ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.424929  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1109 04:16:01.426240  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.094885ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.428543  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.666825ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.428795  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1109 04:16:01.431686  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (2.649795ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.435422  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.09377ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.435638  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1109 04:16:01.437821  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.580858ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.440562  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.218036ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.440937  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1109 04:16:01.443968  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (2.645338ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.446610  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.103697ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.446891  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1109 04:16:01.448886  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.713273ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.451038  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.711017ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.451499  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1109 04:16:01.452840  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.068728ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.455622  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.11669ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.455898  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1109 04:16:01.457225  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.050634ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.459827  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.02479ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.460087  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1109 04:16:01.461457  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.053384ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.465744  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.748234ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.466068  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1109 04:16:01.467776  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.44509ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.470372  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.055977ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.470924  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1109 04:16:01.472768  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.472807  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.472844  112476 httplog.go:90] GET /healthz: (1.03723ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:01.473119  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.9685ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.476529  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.813528ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.476923  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1109 04:16:01.478322  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.028948ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.480971  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.968664ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.483201  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1109 04:16:01.484714  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.038468ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.487280  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.081398ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.487572  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1109 04:16:01.488947  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.071418ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.492798  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.317915ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.493087  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1109 04:16:01.494875  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.411492ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.497704  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.243522ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.497992  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1109 04:16:01.501711  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (3.437353ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.504257  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.011181ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.504490  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1109 04:16:01.505731  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (990.5µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.507085  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.507127  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.507164  112476 httplog.go:90] GET /healthz: (899.58µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.508200  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.064621ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.508473  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1109 04:16:01.509608  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (892.648µs) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.512107  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.137ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.512443  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1109 04:16:01.515914  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.927896ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.518472  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.996002ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.518868  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1109 04:16:01.522088  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (2.451343ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.527318  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.972254ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.528016  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1109 04:16:01.531180  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.988624ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.536139  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.084511ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.536540  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1109 04:16:01.538299  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.463032ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.541490  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.465043ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.541738  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1109 04:16:01.543241  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.217571ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.545832  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.006075ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.546066  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1109 04:16:01.548350  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (2.038262ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.551181  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.262523ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.551434  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1109 04:16:01.552772  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.079201ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.557155  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.835129ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.557452  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1109 04:16:01.558980  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.274239ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.561650  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.180387ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.561885  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1109 04:16:01.563739  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.566112ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.566325  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.126034ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.566761  112476 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1109 04:16:01.569141  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (2.10532ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.572970  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.343975ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.573401  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.573515  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.573566  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1109 04:16:01.573675  112476 httplog.go:90] GET /healthz: (1.611208ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:01.575549  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.387429ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.578018  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.970767ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.578350  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1109 04:16:01.587016  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.595456ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.607858  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.34519ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.607866  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.607923  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.607959  112476 httplog.go:90] GET /healthz: (1.306372ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.608215  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1109 04:16:01.626326  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.786473ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.647191  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.692757ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.647519  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1109 04:16:01.666691  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.495139ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.673826  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.674069  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.674283  112476 httplog.go:90] GET /healthz: (2.334894ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:01.690695  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.917497ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.691250  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1109 04:16:01.705999  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.510587ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.707640  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.707671  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.707708  112476 httplog.go:90] GET /healthz: (1.01867ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.727977  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.270309ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.728298  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1109 04:16:01.746445  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.892293ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.766956  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.407753ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.767288  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1109 04:16:01.775246  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.775291  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.775361  112476 httplog.go:90] GET /healthz: (3.285478ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:01.786548  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.046826ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.807754  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.150468ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.808043  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1109 04:16:01.808363  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.808535  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.808755  112476 httplog.go:90] GET /healthz: (2.275291ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.827102  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (2.549902ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.847355  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.814593ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.847710  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1109 04:16:01.866589  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (2.067885ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.873195  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.873235  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.873302  112476 httplog.go:90] GET /healthz: (1.277321ms) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:01.888583  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.562564ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.888924  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1109 04:16:01.906188  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.649025ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:01.907379  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.907484  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.907530  112476 httplog.go:90] GET /healthz: (1.189139ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.927515  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.996635ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.928117  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1109 04:16:01.946462  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.950269ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.967898  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.331715ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:01.968188  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1109 04:16:01.973249  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:01.973311  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:01.973382  112476 httplog.go:90] GET /healthz: (1.396579ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:01.986528  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.983718ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.011694  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.134634ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.011927  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.011963  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.012001  112476 httplog.go:90] GET /healthz: (5.593403ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.012251  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1109 04:16:02.026213  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.714408ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.046857  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.263315ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.047109  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1109 04:16:02.066053  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.576004ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.073452  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.073491  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.073548  112476 httplog.go:90] GET /healthz: (1.431208ms) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:02.087088  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.321354ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.087461  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1109 04:16:02.106227  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.800666ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.107789  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.107819  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.107859  112476 httplog.go:90] GET /healthz: (1.081785ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.127035  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.527017ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.127289  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1109 04:16:02.146744  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (2.257202ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.167724  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.220036ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.168161  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1109 04:16:02.173749  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.173790  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.173827  112476 httplog.go:90] GET /healthz: (1.338151ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:02.186859  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.390979ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.208702  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.059163ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.209086  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1109 04:16:02.209163  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.209188  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.209229  112476 httplog.go:90] GET /healthz: (2.883118ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.226054  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.555779ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.249140  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.506108ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.249868  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1109 04:16:02.266601  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (2.005893ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.273840  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.273882  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.273969  112476 httplog.go:90] GET /healthz: (1.900627ms) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:02.287678  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.143956ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.288032  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1109 04:16:02.306536  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.840786ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.307597  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.307726  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.307876  112476 httplog.go:90] GET /healthz: (1.507034ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.328349  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.760915ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.328815  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1109 04:16:02.348719  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (3.821469ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.367859  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.348832ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.368300  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1109 04:16:02.373776  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.374075  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.374329  112476 httplog.go:90] GET /healthz: (2.067541ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:02.386748  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.050461ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.407387  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.787174ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.407694  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1109 04:16:02.407838  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.407859  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.407894  112476 httplog.go:90] GET /healthz: (1.416483ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.426637  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (2.112639ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.446953  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.482286ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.447490  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1109 04:16:02.466103  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.59015ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.473863  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.473914  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.473960  112476 httplog.go:90] GET /healthz: (1.816139ms) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:02.487593  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.181775ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.487958  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1109 04:16:02.506279  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.706572ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.507366  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.507401  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.507469  112476 httplog.go:90] GET /healthz: (1.110788ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.527608  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.069336ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.527908  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1109 04:16:02.546234  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.697194ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.568972  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.361262ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.569226  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1109 04:16:02.576700  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.576751  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.576819  112476 httplog.go:90] GET /healthz: (4.864357ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:02.586583  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.08767ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.607301  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.836836ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.607624  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1109 04:16:02.609227  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.609257  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.609293  112476 httplog.go:90] GET /healthz: (1.374271ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.626995  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.458777ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.651083  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.518474ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.651636  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1109 04:16:02.666716  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (2.260072ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.674820  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.674865  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.675179  112476 httplog.go:90] GET /healthz: (2.269747ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:02.691272  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.792367ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.691716  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1109 04:16:02.707713  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (3.200181ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.709783  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.709832  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.709898  112476 httplog.go:90] GET /healthz: (3.325766ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.729924  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.359282ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.730397  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1109 04:16:02.747183  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.692013ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.771933  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.340026ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.772266  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1109 04:16:02.773760  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.773795  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.773875  112476 httplog.go:90] GET /healthz: (1.675006ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:02.786011  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.640984ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.809081  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.538851ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:02.809374  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1109 04:16:02.809566  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.809589  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.809623  112476 httplog.go:90] GET /healthz: (3.156315ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.828221  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (3.691808ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.849632  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.148682ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.850108  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1109 04:16:02.865837  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.376775ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.875120  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.875156  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.875205  112476 httplog.go:90] GET /healthz: (3.266697ms) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:02.888557  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.156405ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.888865  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1109 04:16:02.906060  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.575322ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.907368  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.907397  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.907460  112476 httplog.go:90] GET /healthz: (876.536µs) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.926588  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.178935ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.926854  112476 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1109 04:16:02.946720  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.555953ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.949724  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.477597ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.968803  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.303137ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.969111  112476 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1109 04:16:02.973726  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:02.973759  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:02.973798  112476 httplog.go:90] GET /healthz: (1.409478ms) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:02.986604  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.435333ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:02.988661  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.558922ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.009001  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.523279ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.010774  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:03.010809  112476 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1109 04:16:03.010818  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:03.010927  112476 httplog.go:90] GET /healthz: (3.866477ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.026680  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (2.229434ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.031852  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.605541ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.047221  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.824265ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.047715  112476 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1109 04:16:03.067526  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (2.887935ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.069642  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.630937ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.072914  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:03.072945  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:03.073002  112476 httplog.go:90] GET /healthz: (1.097958ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:03.087054  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.664938ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.087722  112476 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1109 04:16:03.106188  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.660651ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.107913  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:03.107947  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:03.108047  112476 httplog.go:90] GET /healthz: (1.513093ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.108105  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.389417ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.127605  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.021721ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.128001  112476 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1109 04:16:03.146133  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.62987ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.148235  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.489917ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.167041  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.522761ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.167484  112476 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1109 04:16:03.173193  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:03.173229  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:03.173298  112476 httplog.go:90] GET /healthz: (1.371559ms) 0 [Go-http-client/1.1 127.0.0.1:54980]
I1109 04:16:03.188403  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (3.895335ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.190686  112476 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.621199ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.207543  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:03.207580  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:03.207627  112476 httplog.go:90] GET /healthz: (1.15501ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.207805  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.265544ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.208252  112476 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1109 04:16:03.226867  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (2.0282ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.228840  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.322908ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.248892  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.380631ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.249482  112476 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1109 04:16:03.266350  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.826971ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.269037  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.803137ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.273009  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:03.273220  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:03.273608  112476 httplog.go:90] GET /healthz: (1.721133ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:03.287361  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.849344ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.287683  112476 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1109 04:16:03.306242  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.700596ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.307313  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:03.307365  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:03.307397  112476 httplog.go:90] GET /healthz: (1.116873ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.308393  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.416875ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.327456  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.931877ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.327943  112476 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1109 04:16:03.346185  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.619828ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.348461  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.638563ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.376670  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (7.835331ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.377013  112476 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1109 04:16:03.378965  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:03.379352  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:03.379550  112476 httplog.go:90] GET /healthz: (2.030998ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:03.385901  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.556682ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.388376  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.538013ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.407548  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:03.407705  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:03.407749  112476 httplog.go:90] GET /healthz: (1.328966ms) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.408254  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.676751ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.408671  112476 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1109 04:16:03.426089  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.69341ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.428977  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.770771ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.455551  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (10.258705ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.455816  112476 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1109 04:16:03.465941  112476 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.560216ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.468177  112476 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.680862ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.473274  112476 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 04:16:03.473337  112476 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 04:16:03.473456  112476 httplog.go:90] GET /healthz: (1.50716ms) 0 [Go-http-client/1.1 127.0.0.1:54966]
I1109 04:16:03.487474  112476 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.994897ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.487783  112476 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1109 04:16:03.508165  112476 httplog.go:90] GET /healthz: (1.512796ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.509856  112476 httplog.go:90] GET /api/v1/namespaces/default: (1.23056ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.512607  112476 httplog.go:90] POST /api/v1/namespaces: (2.258895ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.514379  112476 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.370157ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.520681  112476 httplog.go:90] POST /api/v1/namespaces/default/services: (5.732614ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.522466  112476 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.167253ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.526553  112476 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (3.679488ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.573906  112476 httplog.go:90] GET /healthz: (1.869369ms) 200 [Go-http-client/1.1 127.0.0.1:54966]
W1109 04:16:03.575388  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.575439  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.575618  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.575735  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.575759  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.575772  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.575780  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.575791  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.575803  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.575818  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.575833  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 04:16:03.575934  112476 factory.go:300] Creating scheduler from algorithm provider 'DefaultProvider'
I1109 04:16:03.575978  112476 factory.go:392] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1109 04:16:03.577762  112476 reflector.go:153] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.577785  112476 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.578298  112476 reflector.go:153] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.578314  112476 reflector.go:188] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.578735  112476 reflector.go:153] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.578746  112476 reflector.go:188] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.579205  112476 reflector.go:153] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.579216  112476 reflector.go:188] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.579363  112476 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (674.732µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.579549  112476 reflector.go:153] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.579561  112476 reflector.go:188] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.579566  112476 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (1.331138ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.580809  112476 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (814.188µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55472]
I1109 04:16:03.580995  112476 reflector.go:153] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.581023  112476 reflector.go:188] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.581394  112476 reflector.go:153] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.581455  112476 reflector.go:188] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.581803  112476 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (805.595µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55474]
I1109 04:16:03.581895  112476 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (548.409µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55472]
I1109 04:16:03.582021  112476 reflector.go:153] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.582042  112476 reflector.go:188] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.582386  112476 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.582425  112476 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.582730  112476 reflector.go:153] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.582753  112476 reflector.go:188] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.583173  112476 reflector.go:153] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.583198  112476 reflector.go:188] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.583841  112476 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (845.587µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:16:03.584764  112476 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (1.300448ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55472]
I1109 04:16:03.585899  112476 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (899.818µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55482]
I1109 04:16:03.586199  112476 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (717.264µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55480]
I1109 04:16:03.584274  112476 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (658.658µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55474]
I1109 04:16:03.591785  112476 get.go:251] Starting watch for /api/v1/services, rv=31360 labels= fields= timeout=9m16s
I1109 04:16:03.592170  112476 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (819.68µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:16:03.595247  112476 get.go:251] Starting watch for /api/v1/nodes, rv=30884 labels= fields= timeout=6m22s
I1109 04:16:03.596171  112476 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30884 labels= fields= timeout=7m2s
I1109 04:16:03.596561  112476 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30887 labels= fields= timeout=7m28s
I1109 04:16:03.598319  112476 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30887 labels= fields= timeout=9m32s
I1109 04:16:03.599110  112476 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30886 labels= fields= timeout=9m43s
I1109 04:16:03.605773  112476 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30884 labels= fields= timeout=6m26s
I1109 04:16:03.605776  112476 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30885 labels= fields= timeout=7m29s
I1109 04:16:03.606665  112476 get.go:251] Starting watch for /api/v1/pods, rv=30884 labels= fields= timeout=9m52s
I1109 04:16:03.607063  112476 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30888 labels= fields= timeout=7m15s
I1109 04:16:03.609220  112476 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30889 labels= fields= timeout=5m13s
I1109 04:16:03.677387  112476 shared_informer.go:227] caches populated
I1109 04:16:03.677687  112476 shared_informer.go:227] caches populated
I1109 04:16:03.677790  112476 shared_informer.go:227] caches populated
I1109 04:16:03.677879  112476 shared_informer.go:227] caches populated
I1109 04:16:03.678035  112476 shared_informer.go:227] caches populated
I1109 04:16:03.678136  112476 shared_informer.go:227] caches populated
I1109 04:16:03.678212  112476 shared_informer.go:227] caches populated
I1109 04:16:03.678286  112476 shared_informer.go:227] caches populated
I1109 04:16:03.678367  112476 shared_informer.go:227] caches populated
I1109 04:16:03.678566  112476 shared_informer.go:227] caches populated
I1109 04:16:03.678659  112476 shared_informer.go:227] caches populated
I1109 04:16:03.678981  112476 shared_informer.go:227] caches populated
I1109 04:16:03.679185  112476 plugins.go:631] Loaded volume plugin "kubernetes.io/mock-provisioner"
W1109 04:16:03.679306  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.679431  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.679538  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.679606  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 04:16:03.679670  112476 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 04:16:03.679802  112476 pv_controller_base.go:289] Starting persistent volume controller
I1109 04:16:03.680324  112476 shared_informer.go:197] Waiting for caches to sync for persistent volume
I1109 04:16:03.680627  112476 reflector.go:153] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.680643  112476 reflector.go:153] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.680665  112476 reflector.go:188] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.680653  112476 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.680627  112476 reflector.go:153] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.680734  112476 reflector.go:188] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.681857  112476 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (617.347µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55522]
I1109 04:16:03.681870  112476 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (633.74µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55520]
I1109 04:16:03.682222  112476 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (444.694µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55528]
I1109 04:16:03.682503  112476 reflector.go:153] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.682520  112476 reflector.go:188] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.682671  112476 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30884 labels= fields= timeout=7m24s
I1109 04:16:03.682814  112476 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30887 labels= fields= timeout=9m45s
I1109 04:16:03.682817  112476 get.go:251] Starting watch for /api/v1/nodes, rv=30884 labels= fields= timeout=7m45s
I1109 04:16:03.683249  112476 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (349.412µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55530]
I1109 04:16:03.683302  112476 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.683314  112476 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I1109 04:16:03.684035  112476 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (341.048µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55530]
I1109 04:16:03.684077  112476 get.go:251] Starting watch for /api/v1/pods, rv=30884 labels= fields= timeout=7m32s
I1109 04:16:03.684720  112476 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30884 labels= fields= timeout=8m42s
I1109 04:16:03.783181  112476 shared_informer.go:227] caches populated
I1109 04:16:03.783261  112476 shared_informer.go:227] caches populated
I1109 04:16:03.783270  112476 shared_informer.go:227] caches populated
I1109 04:16:03.783297  112476 shared_informer.go:227] caches populated
I1109 04:16:03.783303  112476 shared_informer.go:227] caches populated
I1109 04:16:03.783698  112476 shared_informer.go:227] caches populated
I1109 04:16:03.783721  112476 shared_informer.go:204] Caches are synced for persistent volume 
I1109 04:16:03.783741  112476 pv_controller_base.go:160] controller initialized
I1109 04:16:03.783802  112476 pv_controller_base.go:426] resyncing PV controller
I1109 04:16:03.793032  112476 node_tree.go:86] Added node "node-1" in group "" to NodeTree
I1109 04:16:03.793712  112476 httplog.go:90] POST /api/v1/nodes: (9.765741ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.799638  112476 httplog.go:90] POST /api/v1/nodes: (5.52252ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.800249  112476 node_tree.go:86] Added node "node-2" in group "" to NodeTree
I1109 04:16:03.803081  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.347333ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.805579  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.085441ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.805876  112476 volume_binding_test.go:191] Running test wait can bind
I1109 04:16:03.807901  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.808934ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.809828  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.575023ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.817722  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind", version 31386
I1109 04:16:03.817795  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Pending, bound to: "", boundByController: false
I1109 04:16:03.817819  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I1109 04:16:03.817827  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Available
I1109 04:16:03.819139  112476 httplog.go:90] POST /api/v1/persistentvolumes: (8.68224ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.821536  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (3.053911ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:03.821892  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 31387
I1109 04:16:03.821920  112476 pv_controller.go:796] volume "pv-w-canbind" entered phase "Available"
I1109 04:16:03.823057  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 31387
I1109 04:16:03.823090  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "", boundByController: false
I1109 04:16:03.823109  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind]: volume is unused
I1109 04:16:03.823117  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Available
I1109 04:16:03.823128  112476 pv_controller.go:778] updating PersistentVolume[pv-w-canbind]: phase Available already set
I1109 04:16:03.827768  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (8.000486ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.828996  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind", version 31388
I1109 04:16:03.829069  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:03.829101  112476 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind]: no volume found
I1109 04:16:03.829127  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind] status: set phase Pending
I1109 04:16:03.829153  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind] status: phase Pending already set
I1109 04:16:03.829569  112476 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766", Name:"pvc-w-canbind", UID:"990bc4f3-fa1b-48f5-956b-eb334f2d9e3f", APIVersion:"v1", ResourceVersion:"31388", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 04:16:03.835892  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (6.219803ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:03.843249  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (13.66079ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.844273  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind
I1109 04:16:03.844311  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind
I1109 04:16:03.844653  112476 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind" on node "node-1"
I1109 04:16:03.844787  112476 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind", PVC "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind" on node "node-2"
I1109 04:16:03.844811  112476 scheduler_binder.go:725] storage class "wait-vcqw" of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind" does not support dynamic provisioning
I1109 04:16:03.844925  112476 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind", node "node-1"
I1109 04:16:03.844993  112476 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind", version 31387
I1109 04:16:03.845116  112476 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind", node "node-1"
I1109 04:16:03.845140  112476 scheduler_binder.go:404] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind" bound to volume "pv-w-canbind"
I1109 04:16:03.848614  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind: (3.072639ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.849082  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 31391
I1109 04:16:03.849135  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind (uid: 990bc4f3-fa1b-48f5-956b-eb334f2d9e3f)", boundByController: true
I1109 04:16:03.849148  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind
I1109 04:16:03.849159  112476 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind]: bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind"
I1109 04:16:03.849168  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:03.849193  112476 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1109 04:16:03.849227  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind" with version 31388
I1109 04:16:03.849241  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:03.849274  112476 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind (uid: 990bc4f3-fa1b-48f5-956b-eb334f2d9e3f)", boundByController: true
I1109 04:16:03.849288  112476 pv_controller.go:929] binding volume "pv-w-canbind" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind"
I1109 04:16:03.849304  112476 pv_controller.go:827] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind"
I1109 04:16:03.849332  112476 pv_controller.go:839] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind"
I1109 04:16:03.849344  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Bound
I1109 04:16:03.852272  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (2.561386ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.852730  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 31392
I1109 04:16:03.852759  112476 pv_controller.go:796] volume "pv-w-canbind" entered phase "Bound"
I1109 04:16:03.852774  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind]: binding to "pv-w-canbind"
I1109 04:16:03.852797  112476 pv_controller.go:899] volume "pv-w-canbind" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind"
I1109 04:16:03.852937  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 31392
I1109 04:16:03.852986  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind (uid: 990bc4f3-fa1b-48f5-956b-eb334f2d9e3f)", boundByController: true
I1109 04:16:03.853001  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind
I1109 04:16:03.853019  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:03.853043  112476 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1109 04:16:03.856653  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind: (3.595093ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.856997  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind" with version 31393
I1109 04:16:03.857062  112476 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind]: bound to "pv-w-canbind"
I1109 04:16:03.857075  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind] status: set phase Bound
I1109 04:16:03.861689  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind/status: (4.202101ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:03.862155  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind" with version 31394
I1109 04:16:03.862187  112476 pv_controller.go:740] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind" entered phase "Bound"
I1109 04:16:03.862207  112476 pv_controller.go:955] volume "pv-w-canbind" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind"
I1109 04:16:03.862235  112476 pv_controller.go:956] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind (uid: 990bc4f3-fa1b-48f5-956b-eb334f2d9e3f)", boundByController: true
I1109 04:16:03.862253  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1109 04:16:03.862295  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind" with version 31394
I1109 04:16:03.862308  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind]: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1109 04:16:03.862326  112476 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind]: volume "pv-w-canbind" found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind (uid: 990bc4f3-fa1b-48f5-956b-eb334f2d9e3f)", boundByController: true
I1109 04:16:03.862336  112476 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind]: claim is already correctly bound
I1109 04:16:03.862346  112476 pv_controller.go:929] binding volume "pv-w-canbind" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind"
I1109 04:16:03.862357  112476 pv_controller.go:827] updating PersistentVolume[pv-w-canbind]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind"
I1109 04:16:03.862376  112476 pv_controller.go:839] updating PersistentVolume[pv-w-canbind]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind"
I1109 04:16:03.862387  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Bound
I1109 04:16:03.862401  112476 pv_controller.go:778] updating PersistentVolume[pv-w-canbind]: phase Bound already set
I1109 04:16:03.862433  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind]: binding to "pv-w-canbind"
I1109 04:16:03.862452  112476 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind]: already bound to "pv-w-canbind"
I1109 04:16:03.862480  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind] status: set phase Bound
I1109 04:16:03.862510  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind] status: phase Bound already set
I1109 04:16:03.862525  112476 pv_controller.go:955] volume "pv-w-canbind" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind"
I1109 04:16:03.862556  112476 pv_controller.go:956] volume "pv-w-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind (uid: 990bc4f3-fa1b-48f5-956b-eb334f2d9e3f)", boundByController: true
I1109 04:16:03.862571  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind" status after binding: phase: Bound, bound to: "pv-w-canbind", bindCompleted: true, boundByController: true
I1109 04:16:03.947174  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind: (2.259466ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.099261  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind: (54.700579ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.149220  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind: (4.74752ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.248073  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind: (2.049863ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.346699  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind: (2.156897ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.446761  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind: (2.168954ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.546623  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind: (2.089949ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.575790  112476 cache.go:656] Couldn't expire cache for pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind. Binding is still in progress.
I1109 04:16:04.646367  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind: (1.823246ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.746561  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind: (2.045575ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.846804  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind: (2.190736ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.849437  112476 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind" are bound
I1109 04:16:04.849541  112476 factory.go:698] Attempting to bind pod-w-canbind to node-1
I1109 04:16:04.854940  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind/binding: (4.933991ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.855400  112476 scheduler.go:756] pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1109 04:16:04.858941  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (3.019312ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.946548  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind: (1.97619ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.948862  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind: (1.736039ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.950897  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind: (1.588136ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.960106  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (8.548737ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.965013  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (4.356664ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.965173  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind" deleted
I1109 04:16:04.965213  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 31392
I1109 04:16:04.965251  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind (uid: 990bc4f3-fa1b-48f5-956b-eb334f2d9e3f)", boundByController: true
I1109 04:16:04.965260  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind
I1109 04:16:04.968475  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind: (2.976646ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.968752  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind not found
I1109 04:16:04.968785  112476 pv_controller.go:573] volume "pv-w-canbind" is released and reclaim policy "Retain" will be executed
I1109 04:16:04.968801  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind]: set phase Released
I1109 04:16:04.971788  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind/status: (2.583554ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:04.972034  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 31431
I1109 04:16:04.972063  112476 pv_controller.go:796] volume "pv-w-canbind" entered phase "Released"
I1109 04:16:04.972076  112476 pv_controller.go:1009] reclaimVolume[pv-w-canbind]: policy is Retain, nothing to do
I1109 04:16:04.973286  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind" with version 31431
I1109 04:16:04.973333  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind]: phase: Released, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind (uid: 990bc4f3-fa1b-48f5-956b-eb334f2d9e3f)", boundByController: true
I1109 04:16:04.973348  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind
I1109 04:16:04.973379  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind not found
I1109 04:16:04.973386  112476 pv_controller.go:1009] reclaimVolume[pv-w-canbind]: policy is Retain, nothing to do
I1109 04:16:04.973741  112476 store.go:231] deletion of /87c6aefe-e175-476b-9a34-2f22dccf8ed1/persistentvolumes/pv-w-canbind failed because of a conflict, going to retry
I1109 04:16:04.975320  112476 httplog.go:90] DELETE /api/v1/persistentvolumes: (9.665527ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:04.976015  112476 pv_controller_base.go:216] volume "pv-w-canbind" deleted
I1109 04:16:04.976107  112476 pv_controller_base.go:403] deletion of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind" was already processed
I1109 04:16:04.991050  112476 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (15.281179ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:04.992004  112476 volume_binding_test.go:191] Running test wait cannot bind
I1109 04:16:05.001043  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (8.71511ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.003495  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.004746ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.006121  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (1.948936ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.006291  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind", version 31440
I1109 04:16:05.006321  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:05.006345  112476 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind]: no volume found
I1109 04:16:05.006367  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind] status: set phase Pending
I1109 04:16:05.006386  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind] status: phase Pending already set
I1109 04:16:05.006655  112476 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766", Name:"pvc-w-cannotbind", UID:"ca9b8910-7c43-4e6e-90c3-5690f88c3e31", APIVersion:"v1", ResourceVersion:"31440", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 04:16:05.008938  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.200223ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.009449  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (2.593287ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:05.010199  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind
I1109 04:16:05.010245  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind
I1109 04:16:05.010489  112476 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind", PVC "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind" on node "node-2"
I1109 04:16:05.010499  112476 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind", PVC "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind" on node "node-1"
I1109 04:16:05.010517  112476 scheduler_binder.go:725] storage class "wait-r2qk" of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind" does not support dynamic provisioning
I1109 04:16:05.010522  112476 scheduler_binder.go:725] storage class "wait-r2qk" of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind" does not support dynamic provisioning
I1109 04:16:05.010594  112476 factory.go:632] Unable to schedule volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1109 04:16:05.010663  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1109 04:16:05.014663  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-cannotbind: (3.69532ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:05.015525  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-cannotbind/status: (3.532493ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55818]
I1109 04:16:05.015926  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (4.25583ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
E1109 04:16:05.016382  112476 factory.go:673] pod is already present in the activeQ
I1109 04:16:05.017455  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-cannotbind: (1.4973ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55818]
I1109 04:16:05.017740  112476 generic_scheduler.go:341] Preemption will not help schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind on any node.
I1109 04:16:05.017970  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind
I1109 04:16:05.017987  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind
I1109 04:16:05.018169  112476 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind", PVC "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind" on node "node-1"
I1109 04:16:05.018213  112476 scheduler_binder.go:725] storage class "wait-r2qk" of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind" does not support dynamic provisioning
I1109 04:16:05.018261  112476 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind", PVC "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind" on node "node-2"
I1109 04:16:05.018277  112476 scheduler_binder.go:725] storage class "wait-r2qk" of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind" does not support dynamic provisioning
I1109 04:16:05.018313  112476 factory.go:632] Unable to schedule volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1109 04:16:05.018355  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1109 04:16:05.020320  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-cannotbind: (1.750929ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:05.020488  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-cannotbind: (1.474649ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.020745  112476 generic_scheduler.go:341] Preemption will not help schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind on any node.
I1109 04:16:05.021136  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.00405ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55820]
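The rejection above comes from the binder checking each node twice: no existing PV matches the claim on that node, and the claim's class (`wait-r2qk`, backed by `kubernetes.io/no-provisioner`) cannot fall back to dynamic provisioning, so the pod fails on 0/2 nodes. A minimal standalone sketch of that per-node decision follows; the types and helper names are illustrative stand-ins, not the real scheduler_binder API:

```go
package main

import "fmt"

// Illustrative stand-ins for the volume binder's inputs; these are not
// the real k8s.io/kubernetes types.
type PV struct {
	StorageClass string
	Node         string // node affinity, simplified to a single node name
	Bound        bool
}

type StorageClass struct {
	Name        string
	Provisioner string
}

// canProvision mirrors the "does not support dynamic provisioning" check:
// a class backed by kubernetes.io/no-provisioner can never provision.
func canProvision(sc StorageClass) bool {
	return sc.Provisioner != "kubernetes.io/no-provisioner"
}

// nodeFits reports whether a claim of class sc can be satisfied on node,
// either by an existing unbound matching PV or by dynamic provisioning.
func nodeFits(node string, sc StorageClass, pvs []PV) (bool, string) {
	for _, pv := range pvs {
		if !pv.Bound && pv.StorageClass == sc.Name && pv.Node == node {
			return true, ""
		}
	}
	if canProvision(sc) {
		return true, ""
	}
	return false, "node(s) didn't find available persistent volumes to bind"
}

func main() {
	sc := StorageClass{Name: "wait-r2qk", Provisioner: "kubernetes.io/no-provisioner"}
	var pvs []PV // no PVs exist for this class, as in the test above
	for _, node := range []string{"node-1", "node-2"} {
		fit, reason := nodeFits(node, sc, pvs)
		fmt.Printf("%s: fit=%v reason=%q\n", node, fit, reason)
	}
}
```

With both nodes failing for the same reason, the scheduler aggregates them into the "0/2 nodes are available" message seen in the log.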
I1109 04:16:05.113001  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-cannotbind: (2.672896ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.115900  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-cannotbind: (1.84109ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.125917  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind
I1109 04:16:05.126196  112476 scheduler.go:607] Skip schedule deleting pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind
I1109 04:16:05.126933  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (9.450294ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.129142  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.232111ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:05.134995  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (7.619192ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.136362  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind" deleted
I1109 04:16:05.137547  112476 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.87097ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.146762  112476 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.654509ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.147458  112476 volume_binding_test.go:191] Running test wait pvc prebound
I1109 04:16:05.149551  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.826785ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.151943  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.674753ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.155265  112476 httplog.go:90] POST /api/v1/persistentvolumes: (2.596407ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.155818  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-pvc-prebound", version 31456
I1109 04:16:05.155850  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I1109 04:16:05.155874  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1109 04:16:05.155883  112476 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1109 04:16:05.160975  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound", version 31457
I1109 04:16:05.161014  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1109 04:16:05.161041  112476 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I1109 04:16:05.161063  112476 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Pending, bound to: "", boundByController: false
I1109 04:16:05.161083  112476 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: volume is unbound, binding
I1109 04:16:05.161101  112476 pv_controller.go:929] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:05.161113  112476 pv_controller.go:827] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:05.161136  112476 pv_controller.go:847] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I1109 04:16:05.161192  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (5.085092ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.161221  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (5.151658ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:05.161432  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 31458
I1109 04:16:05.161457  112476 pv_controller.go:796] volume "pv-w-pvc-prebound" entered phase "Available"
I1109 04:16:05.161484  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 31458
I1109 04:16:05.161501  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1109 04:16:05.161524  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1109 04:16:05.161531  112476 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1109 04:16:05.161540  112476 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I1109 04:16:05.163387  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (1.750394ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
I1109 04:16:05.163670  112476 pv_controller.go:850] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 04:16:05.163703  112476 pv_controller.go:932] error binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound": failed saving the volume: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 04:16:05.163731  112476 pv_controller_base.go:251] could not sync claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
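The 409 above is the API server's optimistic-concurrency check in action: the status PUT bumped the volume's resourceVersion (31456 to 31458) before the binding PUT landed, so the binding write was rejected and the controller requeues the claim to retry against the latest version on the next sync. A minimal sketch of that conflict-and-retry pattern follows; the names here are illustrative (the real controller goes through client-go and uses string resourceVersions):

```go
package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for the API server's 409 "the object has been
// modified; please apply your changes to the latest version" response.
var errConflict = errors.New("conflict: object has been modified")

// store simulates optimistic concurrency: a write succeeds only when the
// caller's resourceVersion matches the currently stored one.
type store struct{ version int }

func (s *store) update(haveVersion int) (int, error) {
	if haveVersion != s.version {
		return 0, errConflict
	}
	s.version++
	return s.version, nil
}

// bindWithRetry mimics the controller pattern: on conflict, re-read the
// latest version and retry instead of failing permanently.
func bindWithRetry(s *store, staleVersion int) (int, error) {
	v := staleVersion
	for attempt := 0; attempt < 3; attempt++ {
		newV, err := s.update(v)
		if err == nil {
			return newV, nil
		}
		v = s.version // "apply your changes to the latest version and try again"
	}
	return 0, errConflict
}

func main() {
	// The status PUT already bumped the volume to version 31458, but the
	// binding path still holds the stale version 31456 it read earlier.
	s := &store{version: 31458}
	v, err := bindWithRetry(s, 31456)
	fmt.Println(v, err)
}
```

This is why the "could not sync claim" message above is informational rather than fatal: the next sync re-reads the volume and the bind goes through.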
I1109 04:16:05.164580  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound
I1109 04:16:05.164606  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound
I1109 04:16:05.164754  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (2.947001ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55840]
E1109 04:16:05.164831  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:05.164862  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:05.164889  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:05.164915  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I1109 04:16:05.168150  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.295408ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55842]
I1109 04:16:05.171809  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (6.580753ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:05.172314  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound/status: (7.167773ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55532]
E1109 04:16:05.172675  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
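The "pod has unbound immediate PersistentVolumeClaims" rejection above is the VolumeBinding filter refusing to place a pod while any of its immediately-bound claims is still unbound, because binding those claims is the PV controller's job, not the scheduler's; once the bind lands, the pod becomes schedulable. A simplified sketch of that check, with hypothetical claim names and stand-in types rather than the real scheduler framework API:

```go
package main

import "fmt"

// Simplified stand-ins for the VolumeBinding filter's inputs.
type BindingMode int

const (
	Immediate BindingMode = iota
	WaitForFirstConsumer
)

type PVC struct {
	Name  string
	Mode  BindingMode
	Bound bool
}

// checkUnboundImmediatePVCs mirrors the filter error in the log: a pod with
// any unbound PVC whose class uses Immediate binding cannot be scheduled yet,
// since the PV controller must complete that bind first. WaitForFirstConsumer
// claims are deferred to the scheduler's own binder instead.
func checkUnboundImmediatePVCs(pvcs []PVC) error {
	for _, c := range pvcs {
		if c.Mode == Immediate && !c.Bound {
			return fmt.Errorf("pod has unbound immediate PersistentVolumeClaims")
		}
	}
	return nil
}

func main() {
	claims := []PVC{{Name: "claim-a", Mode: Immediate, Bound: false}}
	fmt.Println(checkUnboundImmediatePVCs(claims))
}
```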
[83 similar wait-loop entries elided: GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound returned 200 roughly every 100ms from 04:16:05.268 through 04:16:13.468]
I1109 04:16:13.513880  112476 httplog.go:90] GET /api/v1/namespaces/default: (4.70011ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:13.516034  112476 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.690753ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:13.518020  112476 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.48398ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
[12 similar wait-loop entries elided: GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound returned 200 roughly every 100ms from 04:16:13.567 through 04:16:14.667]
I1109 04:16:14.768634  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (3.063359ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:14.867611  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.882598ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:14.969253  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (3.258983ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:15.068920  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.93897ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:15.169376  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.808016ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:15.268880  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.911646ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:15.369242  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (3.271903ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:15.468954  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.928395ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:15.568510  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.688244ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:15.672388  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (6.395498ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:15.770741  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (3.172427ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:15.868576  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.627647ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:15.968091  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.323077ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:16.068855  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (3.227629ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:16.167573  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.999913ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:16.268583  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (3.04679ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:16.367713  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.017878ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:16.467909  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.337834ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:16.567219  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.604547ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:16.667397  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.719501ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:16.767332  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.790648ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:16.868315  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.728681ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:16.967666  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.873138ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:17.067751  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.054915ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:17.167470  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.820026ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:17.268042  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.387725ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:17.367304  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.639517ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:17.468284  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.638789ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:17.572445  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (6.689087ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:17.667568  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.886395ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:17.772480  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (6.784217ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:17.867351  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.630925ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:17.968059  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.325759ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:18.067651  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.018715ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:18.167692  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.028613ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:18.268442  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.670654ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:18.367357  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.67958ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:18.467530  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.858816ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:18.567189  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.575135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:18.667612  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.870427ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:18.768474  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.812208ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:18.784070  112476 pv_controller_base.go:426] resyncing PV controller
I1109 04:16:18.784188  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 31458
I1109 04:16:18.784237  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1109 04:16:18.784257  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is unused
I1109 04:16:18.784265  112476 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Available
I1109 04:16:18.784274  112476 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Available already set
I1109 04:16:18.784302  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" with version 31457
I1109 04:16:18.784324  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1109 04:16:18.784339  112476 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I1109 04:16:18.784354  112476 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I1109 04:16:18.784371  112476 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: volume is unbound, binding
I1109 04:16:18.784399  112476 pv_controller.go:929] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:18.784428  112476 pv_controller.go:827] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:18.784471  112476 pv_controller.go:847] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" bound to volume "pv-w-pvc-prebound"
I1109 04:16:18.788714  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound
I1109 04:16:18.788747  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound
E1109 04:16:18.788937  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:18.789013  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:18.789053  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1109 04:16:18.789070  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:18.789786  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 32829
I1109 04:16:18.789845  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound (uid: 7e61357e-aedf-4b69-8abf-979d4516b4ec)", boundByController: true
I1109 04:16:18.789857  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound
I1109 04:16:18.789877  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1109 04:16:18.789888  112476 pv_controller.go:617] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1109 04:16:18.789896  112476 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1109 04:16:18.790511  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound: (5.583044ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:18.792615  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 32829
I1109 04:16:18.792648  112476 pv_controller.go:860] updating PersistentVolume[pv-w-pvc-prebound]: bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:18.792661  112476 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1109 04:16:18.793163  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (2.178119ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:18.793458  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 32831
I1109 04:16:18.793490  112476 pv_controller.go:796] volume "pv-w-pvc-prebound" entered phase "Bound"
I1109 04:16:18.793708  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.821636ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:18.794311  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 32831
I1109 04:16:18.794370  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound (uid: 7e61357e-aedf-4b69-8abf-979d4516b4ec)", boundByController: true
I1109 04:16:18.794385  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound
I1109 04:16:18.794388  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.838703ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55842]
I1109 04:16:18.794426  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1109 04:16:18.794444  112476 pv_controller.go:617] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1109 04:16:18.794452  112476 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1109 04:16:18.794463  112476 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1109 04:16:18.796882  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (3.962732ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55550]
I1109 04:16:18.797159  112476 pv_controller.go:788] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 04:16:18.797187  112476 pv_controller.go:938] error binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound": failed saving the volume status: Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 04:16:18.797205  112476 pv_controller_base.go:251] could not sync claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 04:16:18.867325  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.653757ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:18.967668  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.981237ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:19.068008  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.320754ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:19.168308  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.677132ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:19.268218  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.615235ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:19.367850  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.160779ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:19.467795  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.032212ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:19.567464  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.775454ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:19.674296  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (8.527089ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:19.767663  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.078865ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:19.867617  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.04021ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:19.967509  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.822442ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:20.067748  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.139248ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:20.171032  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (5.422516ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:20.267387  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.825863ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:20.367869  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.259901ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:20.467617  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.94983ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:20.567545  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.889082ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:20.581641  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound
I1109 04:16:20.581677  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound
E1109 04:16:20.581888  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:20.581947  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:20.581984  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1109 04:16:20.582004  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:20.587054  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (4.240887ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:20.587159  112476 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events/pod-w-pvc-prebound.15d563704b70f3da: (4.398125ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:20.667334  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.689425ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:20.767396  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.789335ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:20.871069  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (5.386529ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:20.967765  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.116776ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:21.068042  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.224026ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:21.168222  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.47757ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:21.268226  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.42746ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:21.367705  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.969738ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:21.469337  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (3.445008ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:21.567944  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.215016ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:21.667828  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.076202ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:21.768117  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.44014ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:21.868194  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.499011ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
[... 16 further GET requests for pod-w-pvc-prebound at ~100ms intervals (04:16:21.968–04:16:23.469), all 200 ...]
I1109 04:16:23.510958  112476 httplog.go:90] GET /api/v1/namespaces/default: (1.722373ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:23.512745  112476 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.393544ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:23.514151  112476 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.112654ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
[... 100 further GET requests for pod-w-pvc-prebound at ~100ms intervals (04:16:23.567–04:16:33.467), all 200 ...]
I1109 04:16:33.511467  112476 httplog.go:90] GET /api/v1/namespaces/default: (1.803254ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:33.515322  112476 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (3.036642ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:33.517460  112476 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.625616ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:33.567461  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.85778ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:33.667660  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.926858ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:33.769065  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (3.432608ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:33.784318  112476 pv_controller_base.go:426] resyncing PV controller
I1109 04:16:33.784501  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 32831
I1109 04:16:33.784579  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound (uid: 7e61357e-aedf-4b69-8abf-979d4516b4ec)", boundByController: true
I1109 04:16:33.784594  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound
I1109 04:16:33.784600  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" with version 31457
I1109 04:16:33.784622  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound found: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1109 04:16:33.784636  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: phase: Pending, bound to: "pv-w-pvc-prebound", bindCompleted: false, boundByController: false
I1109 04:16:33.784638  112476 pv_controller.go:617] synchronizing PersistentVolume[pv-w-pvc-prebound]: all is bound
I1109 04:16:33.784647  112476 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1109 04:16:33.784652  112476 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested
I1109 04:16:33.784656  112476 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1109 04:16:33.784674  112476 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: volume "pv-w-pvc-prebound" requested and found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound (uid: 7e61357e-aedf-4b69-8abf-979d4516b4ec)", boundByController: true
I1109 04:16:33.784689  112476 pv_controller.go:388] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: volume already bound, finishing the binding
I1109 04:16:33.784699  112476 pv_controller.go:929] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:33.784709  112476 pv_controller.go:827] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:33.784739  112476 pv_controller.go:839] updating PersistentVolume[pv-w-pvc-prebound]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:33.784748  112476 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1109 04:16:33.784757  112476 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1109 04:16:33.784767  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I1109 04:16:33.784784  112476 pv_controller.go:899] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:33.788969  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound
I1109 04:16:33.788992  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound
E1109 04:16:33.789164  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:33.789213  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:33.789238  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1109 04:16:33.789252  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-w-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:33.789903  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-prebound: (4.698007ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:33.790195  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" with version 33927
I1109 04:16:33.790222  112476 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: bound to "pv-w-pvc-prebound"
I1109 04:16:33.790234  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound] status: set phase Bound
I1109 04:16:33.791341  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.717573ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:33.792821  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-prebound/status: (2.060833ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:33.793085  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" with version 33928
I1109 04:16:33.793110  112476 pv_controller.go:740] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" entered phase "Bound"
I1109 04:16:33.793136  112476 pv_controller.go:955] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:33.793164  112476 pv_controller.go:956] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound (uid: 7e61357e-aedf-4b69-8abf-979d4516b4ec)", boundByController: true
I1109 04:16:33.793181  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1109 04:16:33.793213  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" with version 33928
I1109 04:16:33.793227  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1109 04:16:33.793254  112476 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: volume "pv-w-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound (uid: 7e61357e-aedf-4b69-8abf-979d4516b4ec)", boundByController: true
I1109 04:16:33.793263  112476 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: claim is already correctly bound
I1109 04:16:33.793272  112476 pv_controller.go:929] binding volume "pv-w-pvc-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:33.793283  112476 pv_controller.go:827] updating PersistentVolume[pv-w-pvc-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:33.793302  112476 pv_controller.go:839] updating PersistentVolume[pv-w-pvc-prebound]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:33.793311  112476 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Bound
I1109 04:16:33.793320  112476 pv_controller.go:778] updating PersistentVolume[pv-w-pvc-prebound]: phase Bound already set
I1109 04:16:33.793329  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: binding to "pv-w-pvc-prebound"
I1109 04:16:33.793344  112476 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound]: already bound to "pv-w-pvc-prebound"
I1109 04:16:33.793352  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound] status: set phase Bound
I1109 04:16:33.793401  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound] status: phase Bound already set
I1109 04:16:33.793433  112476 pv_controller.go:955] volume "pv-w-pvc-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound"
I1109 04:16:33.793452  112476 pv_controller.go:956] volume "pv-w-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound (uid: 7e61357e-aedf-4b69-8abf-979d4516b4ec)", boundByController: true
I1109 04:16:33.793464  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" status after binding: phase: Bound, bound to: "pv-w-pvc-prebound", bindCompleted: true, boundByController: false
I1109 04:16:33.867688  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.988759ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:33.967542  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.737312ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:34.067769  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.211978ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:34.168100  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.454632ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:34.267733  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.10902ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:34.367763  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.085453ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:34.467890  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.163299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:34.570494  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (4.787324ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:34.667648  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.939677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:34.767849  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (2.163365ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:34.867544  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.832081ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:34.968763  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (3.036671ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.068756  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (3.054993ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.173118  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (5.431265ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.175268  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pvc-prebound: (1.559295ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.177723  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-prebound: (1.623376ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.179559  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-pvc-prebound: (1.411075ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.185021  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound
I1109 04:16:35.185095  112476 scheduler.go:607] Skip schedule deleting pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pvc-prebound
I1109 04:16:35.187533  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (1.934439ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.198557  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (18.507697ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.203902  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (4.836123ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.204579  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" deleted
I1109 04:16:35.204621  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 32831
I1109 04:16:35.204656  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound (uid: 7e61357e-aedf-4b69-8abf-979d4516b4ec)", boundByController: true
I1109 04:16:35.204668  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound
I1109 04:16:35.206200  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-prebound: (1.167092ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.206477  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound not found
I1109 04:16:35.206514  112476 pv_controller.go:573] volume "pv-w-pvc-prebound" is released and reclaim policy "Retain" will be executed
I1109 04:16:35.206526  112476 pv_controller.go:775] updating PersistentVolume[pv-w-pvc-prebound]: set phase Released
I1109 04:16:35.209463  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-pvc-prebound/status: (2.60981ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.209847  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 34023
I1109 04:16:35.209885  112476 pv_controller.go:796] volume "pv-w-pvc-prebound" entered phase "Released"
I1109 04:16:35.209900  112476 pv_controller.go:1009] reclaimVolume[pv-w-pvc-prebound]: policy is Retain, nothing to do
I1109 04:16:35.210170  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-pvc-prebound" with version 34023
I1109 04:16:35.210235  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-pvc-prebound]: phase: Released, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound (uid: 7e61357e-aedf-4b69-8abf-979d4516b4ec)", boundByController: true
I1109 04:16:35.210250  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-pvc-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound
I1109 04:16:35.210270  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-w-pvc-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound not found
I1109 04:16:35.210298  112476 pv_controller.go:1009] reclaimVolume[pv-w-pvc-prebound]: policy is Retain, nothing to do
I1109 04:16:35.210661  112476 store.go:231] deletion of /87c6aefe-e175-476b-9a34-2f22dccf8ed1/persistentvolumes/pv-w-pvc-prebound failed because of a conflict, going to retry
I1109 04:16:35.213403  112476 httplog.go:90] DELETE /api/v1/persistentvolumes: (9.01662ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.213676  112476 pv_controller_base.go:216] volume "pv-w-pvc-prebound" deleted
I1109 04:16:35.213827  112476 pv_controller_base.go:403] deletion of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-prebound" was already processed
I1109 04:16:35.223791  112476 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (9.584375ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.224294  112476 volume_binding_test.go:191] Running test wait cannot bind two
I1109 04:16:35.226754  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.135142ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.231078  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.902114ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.235065  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-cannotbind-1", version 34030
I1109 04:16:35.235107  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Pending, bound to: "", boundByController: false
I1109 04:16:35.235127  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I1109 04:16:35.235135  112476 pv_controller.go:775] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I1109 04:16:35.235425  112476 httplog.go:90] POST /api/v1/persistentvolumes: (3.236211ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.238725  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-1/status: (2.658883ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.239175  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 34031
I1109 04:16:35.239208  112476 pv_controller.go:796] volume "pv-w-cannotbind-1" entered phase "Available"
I1109 04:16:35.239236  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-1" with version 34031
I1109 04:16:35.239252  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-cannotbind-1]: phase: Available, bound to: "", boundByController: false
I1109 04:16:35.239271  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-cannotbind-1]: volume is unused
I1109 04:16:35.239278  112476 pv_controller.go:775] updating PersistentVolume[pv-w-cannotbind-1]: set phase Available
I1109 04:16:35.239288  112476 pv_controller.go:778] updating PersistentVolume[pv-w-cannotbind-1]: phase Available already set
I1109 04:16:35.240171  112476 httplog.go:90] POST /api/v1/persistentvolumes: (3.583258ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.240496  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-cannotbind-2", version 34032
I1109 04:16:35.240526  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Pending, bound to: "", boundByController: false
I1109 04:16:35.240547  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I1109 04:16:35.240555  112476 pv_controller.go:775] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I1109 04:16:35.246768  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-cannotbind-2/status: (5.974322ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.246989  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 34035
I1109 04:16:35.247018  112476 pv_controller.go:796] volume "pv-w-cannotbind-2" entered phase "Available"
I1109 04:16:35.247045  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-cannotbind-2" with version 34035
I1109 04:16:35.247062  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-cannotbind-2]: phase: Available, bound to: "", boundByController: false
I1109 04:16:35.247083  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-cannotbind-2]: volume is unused
I1109 04:16:35.247090  112476 pv_controller.go:775] updating PersistentVolume[pv-w-cannotbind-2]: set phase Available
I1109 04:16:35.247098  112476 pv_controller.go:778] updating PersistentVolume[pv-w-cannotbind-2]: phase Available already set
I1109 04:16:35.247236  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-1", version 34034
I1109 04:16:35.247259  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-1]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:35.247288  112476 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-1]: no volume found
I1109 04:16:35.247312  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-1] status: set phase Pending
I1109 04:16:35.247337  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-1] status: phase Pending already set
I1109 04:16:35.247624  112476 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766", Name:"pvc-w-cannotbind-1", UID:"89096813-9d76-4f1b-b106-08b8e4bded93", APIVersion:"v1", ResourceVersion:"34034", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 04:16:35.247936  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (7.193102ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.250700  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (2.296908ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.251316  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-2", version 34036
I1109 04:16:35.251341  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:35.251372  112476 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-2]: no volume found
I1109 04:16:35.251391  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-2] status: set phase Pending
I1109 04:16:35.251426  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-2] status: phase Pending already set
I1109 04:16:35.251470  112476 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766", Name:"pvc-w-cannotbind-2", UID:"49b4420a-4f59-4c42-9b65-51b962411ef9", APIVersion:"v1", ResourceVersion:"34036", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 04:16:35.251944  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (4.165917ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.254597  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (3.081485ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.255618  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.70928ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.256103  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind-2
I1109 04:16:35.256192  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind-2
I1109 04:16:35.256701  112476 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind-2", PVC "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-2" on node "node-1"
I1109 04:16:35.256813  112476 scheduler_binder.go:725] storage class "wait-k6rx" of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-2" does not support dynamic provisioning
I1109 04:16:35.256708  112476 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind-2", PVC "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-2" on node "node-2"
I1109 04:16:35.256999  112476 scheduler_binder.go:725] storage class "wait-k6rx" of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-2" does not support dynamic provisioning
I1109 04:16:35.257227  112476 factory.go:632] Unable to schedule volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind-2: no fit: 0/2 nodes are available: 2 node(s) didn't find available persistent volumes to bind.; waiting
I1109 04:16:35.257342  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind-2 to (PodScheduled==False, Reason=Unschedulable)
I1109 04:16:35.261944  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-cannotbind-2: (4.266953ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.262943  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (3.38754ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.262997  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-cannotbind-2/status: (3.458711ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59604]
I1109 04:16:35.265872  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-cannotbind-2: (1.536276ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.266212  112476 generic_scheduler.go:341] Preemption will not help schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind-2 on any node.
I1109 04:16:35.359840  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-cannotbind-2: (3.95669ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.364195  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-cannotbind-1: (3.726586ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.366882  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-cannotbind-2: (2.077398ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.369065  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-cannotbind-1: (1.562363ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.374749  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-cannotbind-2: (5.203481ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.384232  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind-2
I1109 04:16:35.384319  112476 scheduler.go:607] Skip schedule deleting pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-cannotbind-2
I1109 04:16:35.387399  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (11.976793ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.388559  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (3.718528ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.399681  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-1" deleted
I1109 04:16:35.411341  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-cannotbind-2" deleted
I1109 04:16:35.411389  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (23.344488ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.418730  112476 pv_controller_base.go:216] volume "pv-w-cannotbind-1" deleted
I1109 04:16:35.421254  112476 httplog.go:90] DELETE /api/v1/persistentvolumes: (9.232014ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.421952  112476 pv_controller_base.go:216] volume "pv-w-cannotbind-2" deleted
I1109 04:16:35.437097  112476 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (15.177998ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.437904  112476 volume_binding_test.go:191] Running test immediate pv prebound
I1109 04:16:35.442146  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.845713ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.458236  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (14.760685ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.471126  112476 httplog.go:90] POST /api/v1/persistentvolumes: (12.015284ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.473843  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-prebound", version 34071
I1109 04:16:35.473898  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Pending, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound (uid: )", boundByController: false
I1109 04:16:35.473908  112476 pv_controller.go:504] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound
I1109 04:16:35.473916  112476 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Available
I1109 04:16:35.491501  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (16.862936ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.491797  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 34072
I1109 04:16:35.491828  112476 pv_controller.go:796] volume "pv-i-prebound" entered phase "Available"
I1109 04:16:35.491857  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 34072
I1109 04:16:35.491887  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound (uid: )", boundByController: false
I1109 04:16:35.491898  112476 pv_controller.go:504] synchronizing PersistentVolume[pv-i-prebound]: volume is pre-bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound
I1109 04:16:35.491905  112476 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Available
I1109 04:16:35.491915  112476 pv_controller.go:778] updating PersistentVolume[pv-i-prebound]: phase Available already set
I1109 04:16:35.523341  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (51.640648ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.539830  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound", version 34074
I1109 04:16:35.539876  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:35.539919  112476 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound (uid: )", boundByController: false
I1109 04:16:35.539936  112476 pv_controller.go:929] binding volume "pv-i-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.539949  112476 pv_controller.go:827] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.539971  112476 pv_controller.go:847] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound" bound to volume "pv-i-prebound"
I1109 04:16:35.552253  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound
I1109 04:16:35.552277  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound
E1109 04:16:35.552537  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:35.552594  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:35.552636  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
I1109 04:16:35.559051  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 34076
I1109 04:16:35.559096  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound (uid: 50e52c57-0d0f-4f3a-adf5-99b731076450)", boundByController: false
I1109 04:16:35.559109  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound
I1109 04:16:35.559128  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:35.559145  112476 pv_controller.go:604] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1109 04:16:35.559614  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (35.476596ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.562466  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound: (22.155029ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.562832  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 34076
I1109 04:16:35.562865  112476 pv_controller.go:860] updating PersistentVolume[pv-i-prebound]: bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.562879  112476 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Bound
I1109 04:16:35.567174  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound/status: (13.108086ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35830]
E1109 04:16:35.567506  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:35.567868  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (13.828434ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35834]
I1109 04:16:35.568264  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound
I1109 04:16:35.568278  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound
E1109 04:16:35.568491  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:35.568530  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:35.568563  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound to (PodScheduled==False, Reason=Unschedulable)
E1109 04:16:35.568579  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pv-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:35.573026  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 34079
I1109 04:16:35.573077  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound (uid: 50e52c57-0d0f-4f3a-adf5-99b731076450)", boundByController: false
I1109 04:16:35.573089  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound
I1109 04:16:35.573107  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:35.573123  112476 pv_controller.go:604] synchronizing PersistentVolume[pv-i-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1109 04:16:35.578555  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (15.377399ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.578914  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 34079
I1109 04:16:35.578960  112476 pv_controller.go:796] volume "pv-i-prebound" entered phase "Bound"
I1109 04:16:35.578976  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1109 04:16:35.578995  112476 pv_controller.go:899] volume "pv-i-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.579684  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (10.626223ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.588086  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (34.02413ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35832]
E1109 04:16:35.588523  112476 factory.go:673] pod is already present in the backoffQ
I1109 04:16:35.603745  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (34.126632ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35830]
I1109 04:16:35.611200  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-pv-prebound: (31.285433ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59606]
I1109 04:16:35.611675  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound" with version 34082
I1109 04:16:35.611707  112476 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: bound to "pv-i-prebound"
I1109 04:16:35.611737  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound] status: set phase Bound
I1109 04:16:35.615224  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-pv-prebound/status: (3.134406ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.615754  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound" with version 34083
I1109 04:16:35.615790  112476 pv_controller.go:740] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound" entered phase "Bound"
I1109 04:16:35.615810  112476 pv_controller.go:955] volume "pv-i-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.615838  112476 pv_controller.go:956] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound (uid: 50e52c57-0d0f-4f3a-adf5-99b731076450)", boundByController: false
I1109 04:16:35.615856  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1109 04:16:35.615899  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound" with version 34083
I1109 04:16:35.615919  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1109 04:16:35.615938  112476 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound (uid: 50e52c57-0d0f-4f3a-adf5-99b731076450)", boundByController: false
I1109 04:16:35.615948  112476 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: claim is already correctly bound
I1109 04:16:35.615959  112476 pv_controller.go:929] binding volume "pv-i-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.615969  112476 pv_controller.go:827] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.615990  112476 pv_controller.go:839] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.616001  112476 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Bound
I1109 04:16:35.616013  112476 pv_controller.go:778] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I1109 04:16:35.616024  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1109 04:16:35.616043  112476 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I1109 04:16:35.616062  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound] status: set phase Bound
I1109 04:16:35.616081  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound] status: phase Bound already set
I1109 04:16:35.616094  112476 pv_controller.go:955] volume "pv-i-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.616117  112476 pv_controller.go:956] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound (uid: 50e52c57-0d0f-4f3a-adf5-99b731076450)", boundByController: false
I1109 04:16:35.616131  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1109 04:16:35.616155  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound" with version 34083
I1109 04:16:35.616165  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1109 04:16:35.616185  112476 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: volume "pv-i-prebound" found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound (uid: 50e52c57-0d0f-4f3a-adf5-99b731076450)", boundByController: false
I1109 04:16:35.616212  112476 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: claim is already correctly bound
I1109 04:16:35.616228  112476 pv_controller.go:929] binding volume "pv-i-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.616237  112476 pv_controller.go:827] updating PersistentVolume[pv-i-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.616266  112476 pv_controller.go:839] updating PersistentVolume[pv-i-prebound]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.616283  112476 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Bound
I1109 04:16:35.616292  112476 pv_controller.go:778] updating PersistentVolume[pv-i-prebound]: phase Bound already set
I1109 04:16:35.616300  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: binding to "pv-i-prebound"
I1109 04:16:35.616317  112476 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound]: already bound to "pv-i-prebound"
I1109 04:16:35.616326  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound] status: set phase Bound
I1109 04:16:35.616363  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound] status: phase Bound already set
I1109 04:16:35.616384  112476 pv_controller.go:955] volume "pv-i-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound"
I1109 04:16:35.616445  112476 pv_controller.go:956] volume "pv-i-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound (uid: 50e52c57-0d0f-4f3a-adf5-99b731076450)", boundByController: false
I1109 04:16:35.616469  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound" status after binding: phase: Bound, bound to: "pv-i-prebound", bindCompleted: true, boundByController: true
I1109 04:16:35.662858  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (2.29587ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.763596  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (2.969159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.862812  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (2.298984ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:35.962691  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (2.133108ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.065052  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (2.005488ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.162802  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (2.270425ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.262435  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (1.934289ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.365323  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (4.800135ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.462154  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (1.636879ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.562380  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (1.88102ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.584178  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound
I1109 04:16:36.584218  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound
I1109 04:16:36.584520  112476 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound" match with Node "node-1"
I1109 04:16:36.584585  112476 scheduler_binder.go:653] PersistentVolume "pv-i-prebound", Node "node-2" mismatch for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound": No matching NodeSelectorTerms
I1109 04:16:36.584703  112476 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound", node "node-1"
I1109 04:16:36.584725  112476 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound", node "node-1": all PVCs bound and nothing to do
I1109 04:16:36.584807  112476 factory.go:698] Attempting to bind pod-i-pv-prebound to node-1
I1109 04:16:36.587877  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound/binding: (2.654244ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.589015  112476 scheduler.go:756] pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pv-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1109 04:16:36.592670  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (3.267639ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.662631  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pv-prebound: (2.200193ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.665919  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-pv-prebound: (2.734685ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.668026  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-prebound: (1.573774ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.681646  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (12.890189ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.694474  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (12.189063ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.696038  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound" deleted
I1109 04:16:36.696289  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 34079
I1109 04:16:36.697205  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound (uid: 50e52c57-0d0f-4f3a-adf5-99b731076450)", boundByController: false
I1109 04:16:36.697236  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound
I1109 04:16:36.699197  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound not found
I1109 04:16:36.699224  112476 pv_controller.go:573] volume "pv-i-prebound" is released and reclaim policy "Retain" will be executed
I1109 04:16:36.699239  112476 pv_controller.go:775] updating PersistentVolume[pv-i-prebound]: set phase Released
I1109 04:16:36.703384  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-prebound/status: (3.111787ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35834]
I1109 04:16:36.704125  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 34159
I1109 04:16:36.704163  112476 pv_controller.go:796] volume "pv-i-prebound" entered phase "Released"
I1109 04:16:36.704178  112476 pv_controller.go:1009] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I1109 04:16:36.704203  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-prebound" with version 34159
I1109 04:16:36.704227  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-prebound]: phase: Released, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound (uid: 50e52c57-0d0f-4f3a-adf5-99b731076450)", boundByController: false
I1109 04:16:36.704239  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound
I1109 04:16:36.704261  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-i-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound not found
I1109 04:16:36.704267  112476 pv_controller.go:1009] reclaimVolume[pv-i-prebound]: policy is Retain, nothing to do
I1109 04:16:36.710539  112476 httplog.go:90] DELETE /api/v1/persistentvolumes: (12.763548ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.710952  112476 pv_controller_base.go:216] volume "pv-i-prebound" deleted
I1109 04:16:36.711011  112476 pv_controller_base.go:403] deletion of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-pv-prebound" was already processed
I1109 04:16:36.719440  112476 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.288683ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.719636  112476 volume_binding_test.go:191] Running test immediate cannot bind
I1109 04:16:36.722646  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.676474ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.725376  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.981955ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.729012  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-cannotbind", version 34169
I1109 04:16:36.729037  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-cannotbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:36.729056  112476 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-cannotbind]: no volume found
I1109 04:16:36.729063  112476 pv_controller.go:1324] provisionClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-cannotbind]: started
E1109 04:16:36.729088  112476 pv_controller.go:1329] error finding provisioning plugin for claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-cannotbind: no volume plugin matched
I1109 04:16:36.729325  112476 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766", Name:"pvc-i-cannotbind", UID:"8db8cb23-51f8-4e13-a224-c87acb002c35", APIVersion:"v1", ResourceVersion:"34169", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched
I1109 04:16:36.729688  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (3.558307ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.732779  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (3.13841ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35834]
I1109 04:16:36.732860  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (2.638445ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.733648  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-cannotbind
I1109 04:16:36.733663  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-cannotbind
E1109 04:16:36.733816  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:36.733831  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:36.733891  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-cannotbind: error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:36.733927  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-cannotbind to (PodScheduled==False, Reason=Unschedulable)
I1109 04:16:36.737385  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-cannotbind: (2.052283ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.738029  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (3.086418ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
E1109 04:16:36.738161  112476 factory.go:673] pod is already present in the activeQ
I1109 04:16:36.739700  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-cannotbind/status: (5.418704ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35834]
E1109 04:16:36.740157  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:36.740359  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-cannotbind
I1109 04:16:36.740379  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-cannotbind
E1109 04:16:36.740692  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:36.740694  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:36.740745  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-cannotbind: error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:36.740776  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-cannotbind to (PodScheduled==False, Reason=Unschedulable)
E1109 04:16:36.740791  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-cannotbind": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:36.744115  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.925367ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.744839  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-cannotbind: (2.904461ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.835906  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-cannotbind: (2.041779ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.838277  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-cannotbind: (1.801416ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.846400  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-cannotbind
I1109 04:16:36.846472  112476 scheduler.go:607] Skip schedule deleting pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-cannotbind
I1109 04:16:36.848023  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (8.315698ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.850512  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (3.601973ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.855318  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (6.703921ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.855672  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-cannotbind" deleted
I1109 04:16:36.857798  112476 httplog.go:90] DELETE /api/v1/persistentvolumes: (1.724318ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.866509  112476 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (8.293227ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.867025  112476 volume_binding_test.go:191] Running test immediate pvc prebound
I1109 04:16:36.869158  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.715956ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.871687  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.894344ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.874227  112476 httplog.go:90] POST /api/v1/persistentvolumes: (2.004261ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.875694  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-pvc-prebound", version 34217
I1109 04:16:36.875859  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Pending, bound to: "", boundByController: false
I1109 04:16:36.875964  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1109 04:16:36.876040  112476 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1109 04:16:36.880172  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (3.669986ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.880452  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound", version 34219
I1109 04:16:36.880488  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1109 04:16:36.880499  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34220
I1109 04:16:36.880504  112476 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I1109 04:16:36.880524  112476 pv_controller.go:796] volume "pv-i-pvc-prebound" entered phase "Available"
I1109 04:16:36.880557  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34220
I1109 04:16:36.880580  112476 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Available, bound to: "", boundByController: false
I1109 04:16:36.880593  112476 pv_controller.go:368] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: volume is unbound, binding
I1109 04:16:36.880605  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "", boundByController: false
I1109 04:16:36.880628  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is unused
I1109 04:16:36.880633  112476 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Available
I1109 04:16:36.880639  112476 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Available already set
I1109 04:16:36.880637  112476 pv_controller.go:929] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound"
I1109 04:16:36.880654  112476 pv_controller.go:827] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound"
I1109 04:16:36.880677  112476 pv_controller.go:847] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound" bound to volume "pv-i-pvc-prebound"
I1109 04:16:36.880968  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (5.483118ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.886103  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34222
I1109 04:16:36.886170  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound (uid: 9749e38a-82ee-49ee-9605-e94af5ab2dd0)", boundByController: true
I1109 04:16:36.886184  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound
I1109 04:16:36.886205  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1109 04:16:36.886218  112476 pv_controller.go:617] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1109 04:16:36.886238  112476 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1109 04:16:36.886246  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (4.768768ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.886286  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound
I1109 04:16:36.886304  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound
E1109 04:16:36.886532  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:36.886577  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:36.886604  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I1109 04:16:36.888173  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound: (7.220269ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.888463  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34222
I1109 04:16:36.888499  112476 pv_controller.go:860] updating PersistentVolume[pv-i-pvc-prebound]: bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound"
I1109 04:16:36.888511  112476 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1109 04:16:36.890251  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (3.669273ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.890651  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34225
I1109 04:16:36.890677  112476 pv_controller.go:796] volume "pv-i-pvc-prebound" entered phase "Bound"
I1109 04:16:36.890703  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34225
I1109 04:16:36.890727  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound (uid: 9749e38a-82ee-49ee-9605-e94af5ab2dd0)", boundByController: true
I1109 04:16:36.890739  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound
I1109 04:16:36.890766  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1109 04:16:36.890779  112476 pv_controller.go:617] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1109 04:16:36.890789  112476 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1109 04:16:36.890797  112476 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1109 04:16:36.891169  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (3.436768ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:36.891713  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (3.992187ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I1109 04:16:36.892041  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound/status: (4.753832ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36402]
E1109 04:16:36.892404  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:36.892503  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound
I1109 04:16:36.892513  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound
E1109 04:16:36.892696  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:36.892731  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:36.892754  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
I1109 04:16:36.892757  112476 store.go:365] GuaranteedUpdate of /87c6aefe-e175-476b-9a34-2f22dccf8ed1/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
E1109 04:16:36.892772  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:36.894109  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (5.329636ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:36.894404  112476 pv_controller.go:788] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound failed: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 04:16:36.894453  112476 pv_controller.go:938] error binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound": failed saving the volume status: Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 04:16:36.894492  112476 pv_controller_base.go:251] could not sync claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound": Operation cannot be fulfilled on persistentvolumes "pv-i-pvc-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 04:16:36.897886  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (4.798256ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:36.898257  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (4.369063ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I1109 04:16:36.989559  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.444637ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
[... 65 similar GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound polling entries elided (04:16:37.089 to 04:16:43.490, all 200; the pod remains unscheduled) ...]
I1109 04:16:43.511790  112476 httplog.go:90] GET /api/v1/namespaces/default: (2.028847ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:43.514105  112476 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.858051ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:43.515870  112476 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.267472ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:43.589084  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.035471ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
[... 11 similar GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound polling entries elided (04:16:43.689 to 04:16:44.688, all 200; the pod remains unscheduled) ...]
I1109 04:16:44.789037  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.925289ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:44.888938  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.735952ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:44.989986  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.879345ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:45.091838  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (4.718931ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:45.189612  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.414367ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:45.289803  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.650876ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:45.388775  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.699694ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:45.489334  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.179117ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:45.591844  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (4.650709ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:45.689553  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.297808ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:45.789169  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.039056ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:45.888859  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.747806ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:45.989458  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.249662ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:46.089920  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.831882ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:46.189120  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.046405ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:46.289275  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.112497ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:46.389181  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.979476ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:46.489194  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.035481ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:46.589183  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.106636ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:46.689122  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.930524ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:46.789252  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.106172ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:46.889118  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.996842ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:46.988851  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.775339ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:47.088910  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.846048ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:47.189508  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.34722ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:47.289346  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.287413ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:47.389362  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.172969ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:47.489640  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.504227ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:47.589110  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.04129ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:47.689345  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.178374ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:47.789438  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.302871ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:47.889489  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.250622ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:47.993142  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (6.051991ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:48.088600  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.469641ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:48.188951  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.915208ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:48.291699  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (4.018448ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:48.389021  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.871985ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:48.489541  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.357743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:48.588922  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.835682ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:48.690081  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.960498ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:48.784696  112476 pv_controller_base.go:426] resyncing PV controller
I1109 04:16:48.784808  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34225
I1109 04:16:48.784855  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound (uid: 9749e38a-82ee-49ee-9605-e94af5ab2dd0)", boundByController: true
I1109 04:16:48.784867  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound
I1109 04:16:48.784886  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound found: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1109 04:16:48.784901  112476 pv_controller.go:617] synchronizing PersistentVolume[pv-i-pvc-prebound]: all is bound
I1109 04:16:48.784910  112476 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1109 04:16:48.784917  112476 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1109 04:16:48.784935  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound" with version 34219
I1109 04:16:48.784946  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: phase: Pending, bound to: "pv-i-pvc-prebound", bindCompleted: false, boundByController: false
I1109 04:16:48.784959  112476 pv_controller.go:345] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested
I1109 04:16:48.784973  112476 pv_controller.go:364] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: volume "pv-i-pvc-prebound" requested and found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound (uid: 9749e38a-82ee-49ee-9605-e94af5ab2dd0)", boundByController: true
I1109 04:16:48.784985  112476 pv_controller.go:388] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: volume already bound, finishing the binding
I1109 04:16:48.784993  112476 pv_controller.go:929] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound"
I1109 04:16:48.785002  112476 pv_controller.go:827] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound"
I1109 04:16:48.785028  112476 pv_controller.go:839] updating PersistentVolume[pv-i-pvc-prebound]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound"
I1109 04:16:48.785037  112476 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1109 04:16:48.785043  112476 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1109 04:16:48.785051  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I1109 04:16:48.785063  112476 pv_controller.go:899] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound"
I1109 04:16:48.788307  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound
I1109 04:16:48.788331  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound
E1109 04:16:48.788504  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:48.788562  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:48.788591  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound to (PodScheduled==False, Reason=Unschedulable)
E1109 04:16:48.788610  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-pvc-prebound": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:48.788692  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-prebound: (3.154493ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:48.789084  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.055958ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:48.789401  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound" with version 35104
I1109 04:16:48.789460  112476 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: bound to "pv-i-pvc-prebound"
I1109 04:16:48.789471  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound] status: set phase Bound
I1109 04:16:48.790665  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.278727ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:48.792825  112476 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events/pod-i-pvc-prebound.15d563748281a4bb: (3.380437ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38844]
I1109 04:16:48.794663  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-prebound/status: (4.964606ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38846]
I1109 04:16:48.794932  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound" with version 35106
I1109 04:16:48.794965  112476 pv_controller.go:740] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound" entered phase "Bound"
I1109 04:16:48.794984  112476 pv_controller.go:955] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound"
I1109 04:16:48.795014  112476 pv_controller.go:956] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound (uid: 9749e38a-82ee-49ee-9605-e94af5ab2dd0)", boundByController: true
I1109 04:16:48.795038  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1109 04:16:48.795075  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound" with version 35106
I1109 04:16:48.795088  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1109 04:16:48.795105  112476 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: volume "pv-i-pvc-prebound" found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound (uid: 9749e38a-82ee-49ee-9605-e94af5ab2dd0)", boundByController: true
I1109 04:16:48.795116  112476 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: claim is already correctly bound
I1109 04:16:48.795137  112476 pv_controller.go:929] binding volume "pv-i-pvc-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound"
I1109 04:16:48.795149  112476 pv_controller.go:827] updating PersistentVolume[pv-i-pvc-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound"
I1109 04:16:48.795170  112476 pv_controller.go:839] updating PersistentVolume[pv-i-pvc-prebound]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound"
I1109 04:16:48.795184  112476 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Bound
I1109 04:16:48.795193  112476 pv_controller.go:778] updating PersistentVolume[pv-i-pvc-prebound]: phase Bound already set
I1109 04:16:48.795203  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: binding to "pv-i-pvc-prebound"
I1109 04:16:48.795226  112476 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound]: already bound to "pv-i-pvc-prebound"
I1109 04:16:48.795238  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound] status: set phase Bound
I1109 04:16:48.795256  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound] status: phase Bound already set
I1109 04:16:48.795269  112476 pv_controller.go:955] volume "pv-i-pvc-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound"
I1109 04:16:48.795288  112476 pv_controller.go:956] volume "pv-i-pvc-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound (uid: 9749e38a-82ee-49ee-9605-e94af5ab2dd0)", boundByController: true
I1109 04:16:48.795301  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound" status after binding: phase: Bound, bound to: "pv-i-pvc-prebound", bindCompleted: true, boundByController: false
I1109 04:16:48.888692  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.525628ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:48.988947  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.760332ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:49.089135  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.961711ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:49.189179  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.07556ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:49.289261  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.059559ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:49.388825  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.742944ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:49.489753  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.594279ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:49.588950  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.926222ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:49.689341  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.122633ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:49.789251  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.958095ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:49.889950  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.74991ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:49.989198  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.027664ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.089762  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.521706ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.189288  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.182628ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.289353  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (2.179037ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.388842  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.735899ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.489142  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.979344ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.586877  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound
I1109 04:16:50.586917  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound
I1109 04:16:50.587144  112476 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound" match with Node "node-1"
I1109 04:16:50.587214  112476 scheduler_binder.go:653] PersistentVolume "pv-i-pvc-prebound", Node "node-2" mismatch for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound": No matching NodeSelectorTerms
I1109 04:16:50.587285  112476 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound", node "node-1"
I1109 04:16:50.587296  112476 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound", node "node-1": all PVCs bound and nothing to do
I1109 04:16:50.587360  112476 factory.go:698] Attempting to bind pod-i-pvc-prebound to node-1
I1109 04:16:50.590382  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (1.454126ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:50.593524  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound/binding: (5.318384ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.593861  112476 scheduler.go:756] pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-pvc-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1109 04:16:50.596388  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.186953ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.690367  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-pvc-prebound: (3.285531ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.694630  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-prebound: (2.353335ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.696838  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-pvc-prebound: (1.789033ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.709645  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (12.368331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.719048  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (8.80751ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.719267  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound" deleted
I1109 04:16:50.719321  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 34225
I1109 04:16:50.719363  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound (uid: 9749e38a-82ee-49ee-9605-e94af5ab2dd0)", boundByController: true
I1109 04:16:50.719376  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound
I1109 04:16:50.721244  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-prebound: (1.253524ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:50.721536  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound not found
I1109 04:16:50.721556  112476 pv_controller.go:573] volume "pv-i-pvc-prebound" is released and reclaim policy "Retain" will be executed
I1109 04:16:50.721568  112476 pv_controller.go:775] updating PersistentVolume[pv-i-pvc-prebound]: set phase Released
I1109 04:16:50.725766  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-pvc-prebound/status: (3.370689ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:50.726038  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 35422
I1109 04:16:50.726080  112476 pv_controller.go:796] volume "pv-i-pvc-prebound" entered phase "Released"
I1109 04:16:50.726094  112476 pv_controller.go:1009] reclaimVolume[pv-i-pvc-prebound]: policy is Retain, nothing to do
I1109 04:16:50.726815  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-pvc-prebound" with version 35422
I1109 04:16:50.726868  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-pvc-prebound]: phase: Released, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound (uid: 9749e38a-82ee-49ee-9605-e94af5ab2dd0)", boundByController: true
I1109 04:16:50.726882  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-pvc-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound
I1109 04:16:50.726902  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-i-pvc-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound not found
I1109 04:16:50.726911  112476 pv_controller.go:1009] reclaimVolume[pv-i-pvc-prebound]: policy is Retain, nothing to do
I1109 04:16:50.727664  112476 store.go:231] deletion of /87c6aefe-e175-476b-9a34-2f22dccf8ed1/persistentvolumes/pv-i-pvc-prebound failed because of a conflict, going to retry
I1109 04:16:50.730181  112476 pv_controller_base.go:216] volume "pv-i-pvc-prebound" deleted
I1109 04:16:50.730224  112476 pv_controller_base.go:403] deletion of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-prebound" was already processed
I1109 04:16:50.730488  112476 httplog.go:90] DELETE /api/v1/persistentvolumes: (10.958396ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.742689  112476 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (11.297407ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.742938  112476 volume_binding_test.go:191] Running test wait pv prebound
I1109 04:16:50.745617  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.186559ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.749668  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.520439ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.753483  112476 httplog.go:90] POST /api/v1/persistentvolumes: (3.187312ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.753814  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-prebound", version 35440
I1109 04:16:50.753862  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Pending, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound (uid: )", boundByController: false
I1109 04:16:50.753879  112476 pv_controller.go:504] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound
I1109 04:16:50.753888  112476 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Available
I1109 04:16:50.756558  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.383922ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:50.756826  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35441
I1109 04:16:50.756854  112476 pv_controller.go:796] volume "pv-w-prebound" entered phase "Available"
I1109 04:16:50.756876  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35441
I1109 04:16:50.756893  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound (uid: )", boundByController: false
I1109 04:16:50.756902  112476 pv_controller.go:504] synchronizing PersistentVolume[pv-w-prebound]: volume is pre-bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound
I1109 04:16:50.756907  112476 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Available
I1109 04:16:50.756913  112476 pv_controller.go:778] updating PersistentVolume[pv-w-prebound]: phase Available already set
I1109 04:16:50.758925  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (4.904063ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.759191  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound", version 35442
I1109 04:16:50.759231  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:50.759271  112476 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound (uid: )", boundByController: false
I1109 04:16:50.759291  112476 pv_controller.go:929] binding volume "pv-w-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound"
I1109 04:16:50.759305  112476 pv_controller.go:827] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound"
I1109 04:16:50.759328  112476 pv_controller.go:847] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I1109 04:16:50.762058  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (2.399688ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:50.762344  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35444
I1109 04:16:50.762381  112476 pv_controller.go:860] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound"
I1109 04:16:50.762394  112476 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Bound
I1109 04:16:50.762787  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (3.274597ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.763116  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound
I1109 04:16:50.763138  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound
I1109 04:16:50.763378  112476 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound", PVC "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" on node "node-2"
I1109 04:16:50.763422  112476 scheduler_binder.go:725] storage class "wait-qbtg" of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" does not support dynamic provisioning
I1109 04:16:50.763379  112476 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound" on node "node-1"
I1109 04:16:50.762321  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35444
I1109 04:16:50.763519  112476 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound", node "node-1"
I1109 04:16:50.763554  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound (uid: ddd15ff0-5421-4e10-a05f-a814c5bf1508)", boundByController: false
I1109 04:16:50.763580  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound
I1109 04:16:50.763606  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:50.763607  112476 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound", node "node-1"
I1109 04:16:50.763627  112476 scheduler_binder.go:404] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I1109 04:16:50.763634  112476 pv_controller.go:604] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1109 04:16:50.764975  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.311161ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:50.765310  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35447
I1109 04:16:50.765349  112476 pv_controller.go:796] volume "pv-w-prebound" entered phase "Bound"
I1109 04:16:50.765365  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I1109 04:16:50.765383  112476 pv_controller.go:899] volume "pv-w-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound"
I1109 04:16:50.765604  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35447
I1109 04:16:50.765642  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound (uid: ddd15ff0-5421-4e10-a05f-a814c5bf1508)", boundByController: false
I1109 04:16:50.765653  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound
I1109 04:16:50.765671  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:50.765685  112476 pv_controller.go:604] synchronizing PersistentVolume[pv-w-prebound]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it
I1109 04:16:50.765881  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (1.904803ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.766097  112476 scheduler_binder.go:407] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" failed: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
I1109 04:16:50.766127  112476 scheduler_assume_cache.go:337] Restored v1.PersistentVolume "pv-w-prebound"
I1109 04:16:50.766151  112476 scheduler.go:519] Failed to bind volumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound": Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again
E1109 04:16:50.766188  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound: Operation cannot be fulfilled on persistentvolumes "pv-w-prebound": the object has been modified; please apply your changes to the latest version and try again; retrying
I1109 04:16:50.766216  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound to (PodScheduled==False, Reason=VolumeBindingFailed)
I1109 04:16:50.770331  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound: (3.292699ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:50.770803  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound/status: (4.24659ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.771256  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound
I1109 04:16:50.771277  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound
I1109 04:16:50.771330  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (4.280435ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39272]
I1109 04:16:50.771513  112476 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound" on node "node-1"
I1109 04:16:50.771594  112476 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound", PVC "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" on node "node-2"
I1109 04:16:50.771617  112476 scheduler_binder.go:725] storage class "wait-qbtg" of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" does not support dynamic provisioning
I1109 04:16:50.771678  112476 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound", node "node-1"
I1109 04:16:50.771778  112476 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound", node "node-1"
I1109 04:16:50.771800  112476 scheduler_binder.go:404] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" bound to volume "pv-w-prebound"
I1109 04:16:50.774245  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-pv-prebound: (8.544865ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:50.774574  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" with version 35452
I1109 04:16:50.774610  112476 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound]: bound to "pv-w-prebound"
I1109 04:16:50.774623  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound] status: set phase Bound
I1109 04:16:50.774835  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound: (2.498899ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.775149  112476 scheduler_binder.go:410] updating PersistentVolume[pv-w-prebound]: bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound"
I1109 04:16:50.780547  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-pv-prebound/status: (5.622431ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36404]
I1109 04:16:50.780877  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" with version 35454
I1109 04:16:50.780915  112476 pv_controller.go:740] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" entered phase "Bound"
I1109 04:16:50.780936  112476 pv_controller.go:955] volume "pv-w-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound"
I1109 04:16:50.780964  112476 pv_controller.go:956] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound (uid: ddd15ff0-5421-4e10-a05f-a814c5bf1508)", boundByController: false
I1109 04:16:50.780984  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1109 04:16:50.781028  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" with version 35454
I1109 04:16:50.781040  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound]: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1109 04:16:50.781061  112476 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound]: volume "pv-w-prebound" found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound (uid: ddd15ff0-5421-4e10-a05f-a814c5bf1508)", boundByController: false
I1109 04:16:50.781071  112476 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound]: claim is already correctly bound
I1109 04:16:50.781081  112476 pv_controller.go:929] binding volume "pv-w-prebound" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound"
I1109 04:16:50.781091  112476 pv_controller.go:827] updating PersistentVolume[pv-w-prebound]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound"
I1109 04:16:50.781111  112476 pv_controller.go:839] updating PersistentVolume[pv-w-prebound]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound"
I1109 04:16:50.781121  112476 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Bound
I1109 04:16:50.781130  112476 pv_controller.go:778] updating PersistentVolume[pv-w-prebound]: phase Bound already set
I1109 04:16:50.781139  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound]: binding to "pv-w-prebound"
I1109 04:16:50.781163  112476 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound]: already bound to "pv-w-prebound"
I1109 04:16:50.781176  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound] status: set phase Bound
I1109 04:16:50.781196  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound] status: phase Bound already set
I1109 04:16:50.781207  112476 pv_controller.go:955] volume "pv-w-prebound" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound"
I1109 04:16:50.781226  112476 pv_controller.go:956] volume "pv-w-prebound" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound (uid: ddd15ff0-5421-4e10-a05f-a814c5bf1508)", boundByController: false
I1109 04:16:50.781240  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" status after binding: phase: Bound, bound to: "pv-w-prebound", bindCompleted: true, boundByController: true
I1109 04:16:50.865982  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound: (2.208186ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:50.965894  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound: (2.216643ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.065646  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound: (1.956847ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.166091  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound: (2.378995ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.265667  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound: (1.951278ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.368526  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound: (4.833587ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.466175  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound: (2.164807ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.567978  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound: (4.255735ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.586968  112476 cache.go:656] Couldn't expire cache for pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound. Binding is still in progress.
I1109 04:16:51.666196  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound: (2.1996ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.766050  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound: (2.429046ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.775458  112476 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound" are bound
I1109 04:16:51.775583  112476 factory.go:698] Attempting to bind pod-w-pv-prebound to node-1
I1109 04:16:51.778795  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound/binding: (2.863393ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.779086  112476 scheduler.go:756] pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-pv-prebound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1109 04:16:51.781499  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.013915ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.865581  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-pv-prebound: (1.864929ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.867316  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-pv-prebound: (1.242274ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.868760  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-prebound: (1.002988ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.876056  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (6.828069ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.880397  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (3.940433ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.880710  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" deleted
I1109 04:16:51.880758  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35447
I1109 04:16:51.880792  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound (uid: ddd15ff0-5421-4e10-a05f-a814c5bf1508)", boundByController: false
I1109 04:16:51.880803  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound
I1109 04:16:51.880825  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound not found
I1109 04:16:51.880845  112476 pv_controller.go:573] volume "pv-w-prebound" is released and reclaim policy "Retain" will be executed
I1109 04:16:51.881019  112476 pv_controller.go:775] updating PersistentVolume[pv-w-prebound]: set phase Released
I1109 04:16:51.884090  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-prebound/status: (2.77406ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:51.884337  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35882
I1109 04:16:51.884378  112476 pv_controller.go:796] volume "pv-w-prebound" entered phase "Released"
I1109 04:16:51.884391  112476 pv_controller.go:1009] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I1109 04:16:51.884434  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-prebound" with version 35882
I1109 04:16:51.884460  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-prebound]: phase: Released, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound (uid: ddd15ff0-5421-4e10-a05f-a814c5bf1508)", boundByController: false
I1109 04:16:51.884472  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-prebound]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound
I1109 04:16:51.884492  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-w-prebound]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound not found
I1109 04:16:51.884499  112476 pv_controller.go:1009] reclaimVolume[pv-w-prebound]: policy is Retain, nothing to do
I1109 04:16:51.887072  112476 httplog.go:90] DELETE /api/v1/persistentvolumes: (6.310002ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.887919  112476 pv_controller_base.go:216] volume "pv-w-prebound" deleted
I1109 04:16:51.887954  112476 pv_controller_base.go:403] deletion of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-pv-prebound" was already processed
I1109 04:16:51.893745  112476 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (6.233768ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.894214  112476 volume_binding_test.go:191] Running test wait can bind two
I1109 04:16:51.895858  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.406821ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.897687  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (1.538111ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.899741  112476 httplog.go:90] POST /api/v1/persistentvolumes: (1.63813ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.900149  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-2", version 35888
I1109 04:16:51.900208  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Pending, bound to: "", boundByController: false
I1109 04:16:51.900228  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I1109 04:16:51.900239  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I1109 04:16:51.902259  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (1.666266ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:51.902567  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35890
I1109 04:16:51.902597  112476 pv_controller.go:796] volume "pv-w-canbind-2" entered phase "Available"
I1109 04:16:51.902614  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-3", version 35889
I1109 04:16:51.902625  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Pending, bound to: "", boundByController: false
I1109 04:16:51.902639  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I1109 04:16:51.902644  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I1109 04:16:51.903664  112476 httplog.go:90] POST /api/v1/persistentvolumes: (2.314154ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.904578  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (1.644542ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:51.904765  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35891
I1109 04:16:51.904787  112476 pv_controller.go:796] volume "pv-w-canbind-3" entered phase "Available"
I1109 04:16:51.904808  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35890
I1109 04:16:51.904820  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "", boundByController: false
I1109 04:16:51.904835  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-2]: volume is unused
I1109 04:16:51.904840  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Available
I1109 04:16:51.904848  112476 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-2]: phase Available already set
I1109 04:16:51.904857  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35891
I1109 04:16:51.904865  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "", boundByController: false
I1109 04:16:51.904878  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-3]: volume is unused
I1109 04:16:51.904881  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Available
I1109 04:16:51.904887  112476 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-3]: phase Available already set
I1109 04:16:51.916156  112476 httplog.go:90] POST /api/v1/persistentvolumes: (12.147395ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.916461  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-5", version 35892
I1109 04:16:51.916500  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Pending, bound to: "", boundByController: false
I1109 04:16:51.916522  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I1109 04:16:51.916530  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I1109 04:16:51.922793  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (5.766499ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.923100  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2", version 35893
I1109 04:16:51.923135  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:51.923179  112476 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2]: no volume found
I1109 04:16:51.923204  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2] status: set phase Pending
I1109 04:16:51.923222  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2] status: phase Pending already set
I1109 04:16:51.923301  112476 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766", Name:"pvc-w-canbind-2", UID:"33366f29-05f7-4d13-8d71-e1af4dc91938", APIVersion:"v1", ResourceVersion:"35893", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 04:16:51.923372  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-5/status: (6.483735ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:51.923668  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 35894
I1109 04:16:51.923786  112476 pv_controller.go:796] volume "pv-w-canbind-5" entered phase "Available"
I1109 04:16:51.923880  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-5" with version 35894
I1109 04:16:51.923947  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-5]: phase: Available, bound to: "", boundByController: false
I1109 04:16:51.924019  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-5]: volume is unused
I1109 04:16:51.924071  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-5]: set phase Available
I1109 04:16:51.924145  112476 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-5]: phase Available already set
I1109 04:16:51.926289  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.588065ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:51.926314  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (2.047415ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I1109 04:16:51.926652  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3", version 35896
I1109 04:16:51.926681  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:51.926710  112476 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3]: no volume found
I1109 04:16:51.926729  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3] status: set phase Pending
I1109 04:16:51.926743  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3] status: phase Pending already set
I1109 04:16:51.926802  112476 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766", Name:"pvc-w-canbind-3", UID:"2e27f852-1b4b-4601-b87b-bdcef783cd08", APIVersion:"v1", ResourceVersion:"35896", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 04:16:51.928691  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (1.607446ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:51.931364  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (4.493695ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:51.931895  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind-2
I1109 04:16:51.931916  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind-2
I1109 04:16:51.932167  112476 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind-2", PVC "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3" on node "node-1"
I1109 04:16:51.932189  112476 scheduler_binder.go:725] storage class "wait-vbtk" of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3" does not support dynamic provisioning
I1109 04:16:51.932309  112476 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind-2" on node "node-2"
I1109 04:16:51.932384  112476 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind-2", node "node-2"
I1109 04:16:51.932460  112476 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-3", version 35891
I1109 04:16:51.932482  112476 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-2", version 35890
I1109 04:16:51.932559  112476 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind-2", node "node-2"
I1109 04:16:51.932576  112476 scheduler_binder.go:404] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2" bound to volume "pv-w-canbind-3"
I1109 04:16:51.935199  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3: (2.26722ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:51.935479  112476 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-3]: bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2"
I1109 04:16:51.935510  112476 scheduler_binder.go:404] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3" bound to volume "pv-w-canbind-2"
I1109 04:16:51.935545  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35899
I1109 04:16:51.935581  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2 (uid: 33366f29-05f7-4d13-8d71-e1af4dc91938)", boundByController: true
I1109 04:16:51.935593  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2
I1109 04:16:51.935611  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:51.935627  112476 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-3]: volume not bound yet, waiting for syncClaim to fix it
I1109 04:16:51.935660  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2" with version 35893
I1109 04:16:51.935676  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:51.935724  112476 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2]: volume "pv-w-canbind-3" found: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2 (uid: 33366f29-05f7-4d13-8d71-e1af4dc91938)", boundByController: true
I1109 04:16:51.935733  112476 pv_controller.go:929] binding volume "pv-w-canbind-3" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2"
I1109 04:16:51.935742  112476 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-3]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2"
I1109 04:16:51.935756  112476 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-3]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2"
I1109 04:16:51.935782  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Bound
I1109 04:16:51.937785  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (1.803592ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:51.938045  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35900
I1109 04:16:51.938075  112476 pv_controller.go:796] volume "pv-w-canbind-3" entered phase "Bound"
I1109 04:16:51.938090  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2]: binding to "pv-w-canbind-3"
I1109 04:16:51.938111  112476 pv_controller.go:899] volume "pv-w-canbind-3" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2"
I1109 04:16:51.938128  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35900
I1109 04:16:51.938162  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2 (uid: 33366f29-05f7-4d13-8d71-e1af4dc91938)", boundByController: true
I1109 04:16:51.938182  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2
I1109 04:16:51.938201  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:51.938215  112476 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-3]: volume not bound yet, waiting for syncClaim to fix it
I1109 04:16:51.938598  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2: (2.627687ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:51.938796  112476 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-2]: bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3"
I1109 04:16:51.938866  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35901
I1109 04:16:51.938905  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3 (uid: 2e27f852-1b4b-4601-b87b-bdcef783cd08)", boundByController: true
I1109 04:16:51.938921  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3
I1109 04:16:51.938938  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:51.938947  112476 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1109 04:16:51.940738  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind-2: (2.406234ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:51.940976  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2" with version 35902
I1109 04:16:51.941021  112476 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2]: bound to "pv-w-canbind-3"
I1109 04:16:51.941032  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2] status: set phase Bound
I1109 04:16:51.942961  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind-2/status: (1.737349ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:51.943301  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2" with version 35903
I1109 04:16:51.943338  112476 pv_controller.go:740] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2" entered phase "Bound"
I1109 04:16:51.943359  112476 pv_controller.go:955] volume "pv-w-canbind-3" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2"
I1109 04:16:51.943377  112476 pv_controller.go:956] volume "pv-w-canbind-3" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2 (uid: 33366f29-05f7-4d13-8d71-e1af4dc91938)", boundByController: true
I1109 04:16:51.943388  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I1109 04:16:51.943432  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3" with version 35896
I1109 04:16:51.943444  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:51.943472  112476 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3]: volume "pv-w-canbind-2" found: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3 (uid: 2e27f852-1b4b-4601-b87b-bdcef783cd08)", boundByController: true
I1109 04:16:51.943484  112476 pv_controller.go:929] binding volume "pv-w-canbind-2" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3"
I1109 04:16:51.943492  112476 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-2]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3"
I1109 04:16:51.943507  112476 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-2]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3"
I1109 04:16:51.943515  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Bound
I1109 04:16:51.945642  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (1.79963ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:51.945900  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35904
I1109 04:16:51.945935  112476 pv_controller.go:796] volume "pv-w-canbind-2" entered phase "Bound"
I1109 04:16:51.945949  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3]: binding to "pv-w-canbind-2"
I1109 04:16:51.945964  112476 pv_controller.go:899] volume "pv-w-canbind-2" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3"
I1109 04:16:51.945989  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35904
I1109 04:16:51.946013  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3 (uid: 2e27f852-1b4b-4601-b87b-bdcef783cd08)", boundByController: true
I1109 04:16:51.946034  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3
I1109 04:16:51.946047  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:51.946057  112476 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1109 04:16:51.948095  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind-3: (1.823111ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:51.948400  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3" with version 35905
I1109 04:16:51.948455  112476 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3]: bound to "pv-w-canbind-2"
I1109 04:16:51.948489  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3] status: set phase Bound
I1109 04:16:51.950899  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind-3/status: (2.09916ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:51.951235  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3" with version 35906
I1109 04:16:51.951456  112476 pv_controller.go:740] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3" entered phase "Bound"
I1109 04:16:51.951569  112476 pv_controller.go:955] volume "pv-w-canbind-2" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3"
I1109 04:16:51.951663  112476 pv_controller.go:956] volume "pv-w-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3 (uid: 2e27f852-1b4b-4601-b87b-bdcef783cd08)", boundByController: true
I1109 04:16:51.951767  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3" status after binding: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I1109 04:16:51.951908  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2" with version 35903
I1109 04:16:51.951987  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2]: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I1109 04:16:51.952073  112476 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2]: volume "pv-w-canbind-3" found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2 (uid: 33366f29-05f7-4d13-8d71-e1af4dc91938)", boundByController: true
I1109 04:16:51.952146  112476 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2]: claim is already correctly bound
I1109 04:16:51.952208  112476 pv_controller.go:929] binding volume "pv-w-canbind-3" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2"
I1109 04:16:51.952275  112476 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-3]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2"
I1109 04:16:51.952355  112476 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-3]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2"
I1109 04:16:51.952452  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Bound
I1109 04:16:51.952525  112476 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-3]: phase Bound already set
I1109 04:16:51.952590  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2]: binding to "pv-w-canbind-3"
I1109 04:16:51.952673  112476 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2]: already bound to "pv-w-canbind-3"
I1109 04:16:51.952754  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2] status: set phase Bound
I1109 04:16:51.952948  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2] status: phase Bound already set
I1109 04:16:51.953102  112476 pv_controller.go:955] volume "pv-w-canbind-3" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2"
I1109 04:16:51.953221  112476 pv_controller.go:956] volume "pv-w-canbind-3" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2 (uid: 33366f29-05f7-4d13-8d71-e1af4dc91938)", boundByController: true
I1109 04:16:51.953306  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2" status after binding: phase: Bound, bound to: "pv-w-canbind-3", bindCompleted: true, boundByController: true
I1109 04:16:51.953399  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3" with version 35906
I1109 04:16:51.953513  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3]: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I1109 04:16:51.953590  112476 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3]: volume "pv-w-canbind-2" found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3 (uid: 2e27f852-1b4b-4601-b87b-bdcef783cd08)", boundByController: true
I1109 04:16:51.953665  112476 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3]: claim is already correctly bound
I1109 04:16:51.953728  112476 pv_controller.go:929] binding volume "pv-w-canbind-2" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3"
I1109 04:16:51.953795  112476 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-2]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3"
I1109 04:16:51.953882  112476 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-2]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3"
I1109 04:16:51.953950  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Bound
I1109 04:16:51.954024  112476 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-2]: phase Bound already set
I1109 04:16:51.954086  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3]: binding to "pv-w-canbind-2"
I1109 04:16:51.954157  112476 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3]: already bound to "pv-w-canbind-2"
I1109 04:16:51.954230  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3] status: set phase Bound
I1109 04:16:51.954302  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3] status: phase Bound already set
I1109 04:16:51.954398  112476 pv_controller.go:955] volume "pv-w-canbind-2" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3"
I1109 04:16:51.954504  112476 pv_controller.go:956] volume "pv-w-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3 (uid: 2e27f852-1b4b-4601-b87b-bdcef783cd08)", boundByController: true
I1109 04:16:51.954575  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3" status after binding: phase: Bound, bound to: "pv-w-canbind-2", bindCompleted: true, boundByController: true
I1109 04:16:52.034257  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind-2: (2.055095ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:52.134145  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind-2: (1.976207ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:52.234330  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind-2: (2.03067ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:52.334399  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind-2: (2.175761ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:52.434813  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind-2: (2.162462ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:52.536460  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind-2: (4.222775ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:52.587211  112476 cache.go:656] Couldn't expire cache for pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind-2. Binding is still in progress.
I1109 04:16:52.634036  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind-2: (1.878338ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:52.734009  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind-2: (1.77807ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:52.834915  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind-2: (2.625074ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:52.934355  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind-2: (2.060384ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:52.939145  112476 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind-2" are bound
I1109 04:16:52.939232  112476 factory.go:698] Attempting to bind pod-w-canbind-2 to node-2
I1109 04:16:52.942306  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind-2/binding: (2.643839ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:52.942602  112476 scheduler.go:756] pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-w-canbind-2 is bound successfully on node "node-2", 2 nodes evaluated, 1 nodes were found feasible.
I1109 04:16:52.945199  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.198584ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.034675  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-w-canbind-2: (2.424687ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.037120  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind-2: (1.707365ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.039001  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind-3: (1.421638ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.041021  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-2: (1.604445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.042924  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-3: (1.422035ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.045144  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-5: (1.078631ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.052174  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (6.689456ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.058235  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2" deleted
I1109 04:16:53.058291  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35900
I1109 04:16:53.058322  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2 (uid: 33366f29-05f7-4d13-8d71-e1af4dc91938)", boundByController: true
I1109 04:16:53.058330  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2
I1109 04:16:53.060476  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind-2: (1.720968ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:53.060776  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2 not found
I1109 04:16:53.060805  112476 pv_controller.go:573] volume "pv-w-canbind-3" is released and reclaim policy "Retain" will be executed
I1109 04:16:53.060818  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-3]: set phase Released
I1109 04:16:53.062527  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (9.541221ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.062782  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3" deleted
I1109 04:16:53.064574  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-3/status: (3.428833ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:53.064884  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35933
I1109 04:16:53.064919  112476 pv_controller.go:796] volume "pv-w-canbind-3" entered phase "Released"
I1109 04:16:53.064933  112476 pv_controller.go:1009] reclaimVolume[pv-w-canbind-3]: policy is Retain, nothing to do
I1109 04:16:53.064959  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-2" with version 35904
I1109 04:16:53.064983  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-2]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3 (uid: 2e27f852-1b4b-4601-b87b-bdcef783cd08)", boundByController: true
I1109 04:16:53.064998  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-2]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3
I1109 04:16:53.066252  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind-3: (1.025483ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:53.066492  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-2]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3 not found
I1109 04:16:53.066514  112476 pv_controller.go:573] volume "pv-w-canbind-2" is released and reclaim policy "Retain" will be executed
I1109 04:16:53.066540  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-2]: set phase Released
I1109 04:16:53.069058  112476 store.go:365] GuaranteedUpdate of /87c6aefe-e175-476b-9a34-2f22dccf8ed1/persistentvolumes/pv-w-canbind-2 failed because of a conflict, going to retry
I1109 04:16:53.069256  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-2/status: (2.418688ms) 409 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:53.069507  112476 pv_controller.go:788] updating PersistentVolume[pv-w-canbind-2]: set phase Released failed: Operation cannot be fulfilled on persistentvolumes "pv-w-canbind-2": StorageError: invalid object, Code: 4, Key: /87c6aefe-e175-476b-9a34-2f22dccf8ed1/persistentvolumes/pv-w-canbind-2, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 61da9dd5-0e44-4195-a909-35e9b9704297, UID in object meta: 
I1109 04:16:53.069542  112476 pv_controller_base.go:204] could not sync volume "pv-w-canbind-2": Operation cannot be fulfilled on persistentvolumes "pv-w-canbind-2": StorageError: invalid object, Code: 4, Key: /87c6aefe-e175-476b-9a34-2f22dccf8ed1/persistentvolumes/pv-w-canbind-2, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 61da9dd5-0e44-4195-a909-35e9b9704297, UID in object meta: 
I1109 04:16:53.069577  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-3" with version 35933
I1109 04:16:53.069609  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-3]: phase: Released, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2 (uid: 33366f29-05f7-4d13-8d71-e1af4dc91938)", boundByController: true
I1109 04:16:53.069623  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-3]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2
I1109 04:16:53.069644  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-3]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2 not found
I1109 04:16:53.069659  112476 pv_controller.go:1009] reclaimVolume[pv-w-canbind-3]: policy is Retain, nothing to do
I1109 04:16:53.069678  112476 pv_controller_base.go:216] volume "pv-w-canbind-2" deleted
I1109 04:16:53.069715  112476 pv_controller_base.go:403] deletion of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-3" was already processed
I1109 04:16:53.071330  112476 pv_controller_base.go:216] volume "pv-w-canbind-3" deleted
I1109 04:16:53.071378  112476 pv_controller_base.go:403] deletion of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-2" was already processed
I1109 04:16:53.073712  112476 httplog.go:90] DELETE /api/v1/persistentvolumes: (10.629394ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.073909  112476 pv_controller_base.go:216] volume "pv-w-canbind-5" deleted
I1109 04:16:53.084990  112476 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (10.681384ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.088733  112476 volume_binding_test.go:191] Running test mix immediate and wait
I1109 04:16:53.096591  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.960744ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.099655  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.058178ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.102849  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-w-canbind-4", version 35941
I1109 04:16:53.102899  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Pending, bound to: "", boundByController: false
I1109 04:16:53.102924  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I1109 04:16:53.102932  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I1109 04:16:53.104194  112476 httplog.go:90] POST /api/v1/persistentvolumes: (3.955033ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.106818  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (3.621173ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:53.107301  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 35942
I1109 04:16:53.107344  112476 pv_controller.go:796] volume "pv-w-canbind-4" entered phase "Available"
I1109 04:16:53.107634  112476 httplog.go:90] POST /api/v1/persistentvolumes: (2.790352ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.107793  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 35942
I1109 04:16:53.107827  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "", boundByController: false
I1109 04:16:53.107855  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-w-canbind-4]: volume is unused
I1109 04:16:53.107866  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Available
I1109 04:16:53.107875  112476 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-4]: phase Available already set
I1109 04:16:53.107888  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-canbind-2", version 35943
I1109 04:16:53.107900  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Pending, bound to: "", boundByController: false
I1109 04:16:53.107918  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I1109 04:16:53.107924  112476 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I1109 04:16:53.110588  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4", version 35944
I1109 04:16:53.110627  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:53.110655  112476 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4]: no volume found
I1109 04:16:53.110675  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4] status: set phase Pending
I1109 04:16:53.110687  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4] status: phase Pending already set
I1109 04:16:53.110739  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (2.525501ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.111054  112476 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766", Name:"pvc-w-canbind-4", UID:"ed0f7fec-00b1-4cb9-98b3-16b279c4e1f1", APIVersion:"v1", ResourceVersion:"35944", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1109 04:16:53.111366  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (3.129768ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:53.111589  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 35945
I1109 04:16:53.111629  112476 pv_controller.go:796] volume "pv-i-canbind-2" entered phase "Available"
I1109 04:16:53.112260  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 35945
I1109 04:16:53.112287  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "", boundByController: false
I1109 04:16:53.112303  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind-2]: volume is unused
I1109 04:16:53.112310  112476 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Available
I1109 04:16:53.112317  112476 pv_controller.go:778] updating PersistentVolume[pv-i-canbind-2]: phase Available already set
I1109 04:16:53.113061  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (1.960614ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.113717  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (2.232783ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39808]
I1109 04:16:53.114036  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2", version 35947
I1109 04:16:53.114075  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:53.114111  112476 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Available, bound to: "", boundByController: false
I1109 04:16:53.114123  112476 pv_controller.go:929] binding volume "pv-i-canbind-2" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2"
I1109 04:16:53.114146  112476 pv_controller.go:827] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2"
I1109 04:16:53.114171  112476 pv_controller.go:847] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2" bound to volume "pv-i-canbind-2"
I1109 04:16:53.116398  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (2.169247ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.116696  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound
I1109 04:16:53.116713  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound
E1109 04:16:53.116889  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:53.116892  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:53.116937  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound: error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:53.116964  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound to (PodScheduled==False, Reason=Unschedulable)
I1109 04:16:53.118973  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2: (4.552249ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:53.119223  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 35949
I1109 04:16:53.119256  112476 pv_controller.go:860] updating PersistentVolume[pv-i-canbind-2]: bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2"
I1109 04:16:53.119269  112476 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I1109 04:16:53.119621  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 35949
I1109 04:16:53.119705  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2 (uid: 0b0b8761-8a03-4cb5-99be-82cbbc992cc4)", boundByController: true
I1109 04:16:53.119717  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2
I1109 04:16:53.119732  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:53.119779  112476 pv_controller.go:601] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
I1109 04:16:53.120580  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound/status: (3.364516ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
E1109 04:16:53.120947  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:53.120968  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (3.051049ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39812]
I1109 04:16:53.121018  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound
I1109 04:16:53.121029  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound
E1109 04:16:53.121220  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:53.121256  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound: error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:53.121278  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound to (PodScheduled==False, Reason=Unschedulable)
E1109 04:16:53.121296  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-mix-bound": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:53.121305  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (2.616031ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39810]
I1109 04:16:53.123395  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.805423ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.123396  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 35952
I1109 04:16:53.123478  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2 (uid: 0b0b8761-8a03-4cb5-99be-82cbbc992cc4)", boundByController: true
I1109 04:16:53.123496  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2
I1109 04:16:53.123513  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:53.123527  112476 pv_controller.go:601] synchronizing PersistentVolume[pv-i-canbind-2]: volume not bound yet, waiting for syncClaim to fix it
E1109 04:16:53.123663  112476 factory.go:673] pod is already present in the backoffQ
I1109 04:16:53.123797  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (4.281792ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39270]
I1109 04:16:53.123818  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.211886ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39812]
I1109 04:16:53.124219  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 35952
I1109 04:16:53.124242  112476 pv_controller.go:796] volume "pv-i-canbind-2" entered phase "Bound"
I1109 04:16:53.124253  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I1109 04:16:53.124265  112476 pv_controller.go:899] volume "pv-i-canbind-2" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2"
I1109 04:16:53.126326  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-canbind-2: (1.821902ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.126647  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2" with version 35954
I1109 04:16:53.126738  112476 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2]: bound to "pv-i-canbind-2"
I1109 04:16:53.126777  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2] status: set phase Bound
I1109 04:16:53.128835  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-canbind-2/status: (1.782743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.129115  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2" with version 35955
I1109 04:16:53.129163  112476 pv_controller.go:740] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2" entered phase "Bound"
I1109 04:16:53.129179  112476 pv_controller.go:955] volume "pv-i-canbind-2" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2"
I1109 04:16:53.129197  112476 pv_controller.go:956] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2 (uid: 0b0b8761-8a03-4cb5-99be-82cbbc992cc4)", boundByController: true
I1109 04:16:53.129208  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1109 04:16:53.129240  112476 pv_controller_base.go:533] storeObjectUpdate: ignoring claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2" version 35954
I1109 04:16:53.129438  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2" with version 35955
I1109 04:16:53.129467  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2]: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1109 04:16:53.129486  112476 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2]: volume "pv-i-canbind-2" found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2 (uid: 0b0b8761-8a03-4cb5-99be-82cbbc992cc4)", boundByController: true
I1109 04:16:53.129495  112476 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2]: claim is already correctly bound
I1109 04:16:53.129504  112476 pv_controller.go:929] binding volume "pv-i-canbind-2" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2"
I1109 04:16:53.129512  112476 pv_controller.go:827] updating PersistentVolume[pv-i-canbind-2]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2"
I1109 04:16:53.129526  112476 pv_controller.go:839] updating PersistentVolume[pv-i-canbind-2]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2"
I1109 04:16:53.129533  112476 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Bound
I1109 04:16:53.129539  112476 pv_controller.go:778] updating PersistentVolume[pv-i-canbind-2]: phase Bound already set
I1109 04:16:53.129546  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2]: binding to "pv-i-canbind-2"
I1109 04:16:53.129618  112476 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2]: already bound to "pv-i-canbind-2"
I1109 04:16:53.129627  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2] status: set phase Bound
I1109 04:16:53.129645  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2] status: phase Bound already set
I1109 04:16:53.129657  112476 pv_controller.go:955] volume "pv-i-canbind-2" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2"
I1109 04:16:53.129682  112476 pv_controller.go:956] volume "pv-i-canbind-2" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2 (uid: 0b0b8761-8a03-4cb5-99be-82cbbc992cc4)", boundByController: true
I1109 04:16:53.129692  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2" status after binding: phase: Bound, bound to: "pv-i-canbind-2", bindCompleted: true, boundByController: true
I1109 04:16:53.219735  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (2.232566ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.320299  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (3.100295ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.420110  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (2.829442ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.512516  112476 httplog.go:90] GET /api/v1/namespaces/default: (2.566842ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.514328  112476 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.360875ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.515583  112476 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (858.383µs) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.518109  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.033554ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.619031  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.762977ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.719109  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.854674ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.818954  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.571677ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:53.919395  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (2.15322ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.019278  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (2.025623ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.118965  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.767058ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.218709  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.485515ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.318922  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.685065ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.419083  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.673047ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.518856  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.644839ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.587818  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound
I1109 04:16:54.587855  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound
I1109 04:16:54.588110  112476 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound" match with Node "node-1"
I1109 04:16:54.588160  112476 scheduler_binder.go:699] Found matching volumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound" on node "node-1"
I1109 04:16:54.588263  112476 scheduler_binder.go:653] PersistentVolume "pv-i-canbind-2", Node "node-2" mismatch for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound": No matching NodeSelectorTerms
I1109 04:16:54.588300  112476 scheduler_binder.go:686] No matching volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound", PVC "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4" on node "node-2"
I1109 04:16:54.588313  112476 scheduler_binder.go:725] storage class "wait-xgcw" of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4" does not support dynamic provisioning
I1109 04:16:54.588382  112476 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound", node "node-1"
I1109 04:16:54.588449  112476 scheduler_assume_cache.go:323] Assumed v1.PersistentVolume "pv-w-canbind-4", version 35942
I1109 04:16:54.588540  112476 scheduler_binder.go:332] BindPodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound", node "node-1"
I1109 04:16:54.588556  112476 scheduler_binder.go:404] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4" bound to volume "pv-w-canbind-4"
I1109 04:16:54.591443  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4: (2.402557ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.591706  112476 scheduler_binder.go:410] updating PersistentVolume[pv-w-canbind-4]: bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4"
I1109 04:16:54.591905  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36419
I1109 04:16:54.591944  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4 (uid: ed0f7fec-00b1-4cb9-98b3-16b279c4e1f1)", boundByController: true
I1109 04:16:54.591958  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4
I1109 04:16:54.591979  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:54.591994  112476 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I1109 04:16:54.592043  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4" with version 35944
I1109 04:16:54.592058  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:54.592097  112476 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4 (uid: ed0f7fec-00b1-4cb9-98b3-16b279c4e1f1)", boundByController: true
I1109 04:16:54.592109  112476 pv_controller.go:929] binding volume "pv-w-canbind-4" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4"
I1109 04:16:54.592121  112476 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4"
I1109 04:16:54.592137  112476 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4"
I1109 04:16:54.592146  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I1109 04:16:54.594911  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (2.282299ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.595335  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36422
I1109 04:16:54.595366  112476 pv_controller.go:796] volume "pv-w-canbind-4" entered phase "Bound"
I1109 04:16:54.595385  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I1109 04:16:54.595422  112476 pv_controller.go:899] volume "pv-w-canbind-4" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4"
I1109 04:16:54.595434  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36422
I1109 04:16:54.595466  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4 (uid: ed0f7fec-00b1-4cb9-98b3-16b279c4e1f1)", boundByController: true
I1109 04:16:54.595479  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4
I1109 04:16:54.595500  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4 found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:54.595514  112476 pv_controller.go:601] synchronizing PersistentVolume[pv-w-canbind-4]: volume not bound yet, waiting for syncClaim to fix it
I1109 04:16:54.597767  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind-4: (2.065335ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.598030  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4" with version 36423
I1109 04:16:54.598064  112476 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4]: bound to "pv-w-canbind-4"
I1109 04:16:54.598075  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4] status: set phase Bound
I1109 04:16:54.607702  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind-4/status: (9.368069ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.610027  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4" with version 36425
I1109 04:16:54.610070  112476 pv_controller.go:740] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4" entered phase "Bound"
I1109 04:16:54.610091  112476 pv_controller.go:955] volume "pv-w-canbind-4" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4"
I1109 04:16:54.610120  112476 pv_controller.go:956] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4 (uid: ed0f7fec-00b1-4cb9-98b3-16b279c4e1f1)", boundByController: true
I1109 04:16:54.610141  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I1109 04:16:54.610178  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4" with version 36425
I1109 04:16:54.610195  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4]: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
I1109 04:16:54.610214  112476 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4]: volume "pv-w-canbind-4" found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4 (uid: ed0f7fec-00b1-4cb9-98b3-16b279c4e1f1)", boundByController: true
I1109 04:16:54.610224  112476 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4]: claim is already correctly bound
I1109 04:16:54.610234  112476 pv_controller.go:929] binding volume "pv-w-canbind-4" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4"
I1109 04:16:54.610244  112476 pv_controller.go:827] updating PersistentVolume[pv-w-canbind-4]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4"
I1109 04:16:54.610266  112476 pv_controller.go:839] updating PersistentVolume[pv-w-canbind-4]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4"
I1109 04:16:54.610280  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Bound
I1109 04:16:54.610288  112476 pv_controller.go:778] updating PersistentVolume[pv-w-canbind-4]: phase Bound already set
I1109 04:16:54.610299  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4]: binding to "pv-w-canbind-4"
I1109 04:16:54.610316  112476 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4]: already bound to "pv-w-canbind-4"
I1109 04:16:54.610326  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4] status: set phase Bound
I1109 04:16:54.610344  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4] status: phase Bound already set
I1109 04:16:54.610358  112476 pv_controller.go:955] volume "pv-w-canbind-4" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4"
I1109 04:16:54.610394  112476 pv_controller.go:956] volume "pv-w-canbind-4" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4 (uid: ed0f7fec-00b1-4cb9-98b3-16b279c4e1f1)", boundByController: true
I1109 04:16:54.610441  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4" status after binding: phase: Bound, bound to: "pv-w-canbind-4", bindCompleted: true, boundByController: true
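The four PUTs above (the volume, then its status, then the claim, then the claim's status) trace the bind order the PV controller follows when it pairs `pv-w-canbind-4` with `pvc-w-canbind-4`. A minimal sketch of that order, using plain stand-in structs rather than the real client-go types, just to make the sequence explicit:

```go
package main

import "fmt"

// Minimal stand-ins for the two API objects involved in a bind.
// These are illustrative types, not the real k8s.io/api structs.
type PV struct {
	Name     string
	ClaimRef string // namespace/name of the bound PVC
	Phase    string
}

type PVC struct {
	Name       string
	VolumeName string
	Phase      string
}

// bind mirrors the update order visible in the log: point the volume at
// the claim, mark the volume Bound, then point the claim at the volume
// and mark the claim Bound. Each step corresponds to one PUT above.
func bind(pv *PV, pvc *PVC, ns string) {
	pv.ClaimRef = ns + "/" + pvc.Name // PUT /api/v1/persistentvolumes/<pv>
	pv.Phase = "Bound"                // PUT /api/v1/persistentvolumes/<pv>/status
	pvc.VolumeName = pv.Name          // PUT .../persistentvolumeclaims/<pvc>
	pvc.Phase = "Bound"               // PUT .../persistentvolumeclaims/<pvc>/status
}

func main() {
	pv := &PV{Name: "pv-w-canbind-4"}
	pvc := &PVC{Name: "pvc-w-canbind-4"}
	bind(pv, pvc, "volume-scheduling-ns")
	fmt.Println(pv.Phase, pvc.Phase, pv.ClaimRef)
}
```

The second pass through syncClaim (the "already bound"/"phase Bound already set" lines) is the controller re-observing its own update and confirming the bind is idempotent.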
I1109 04:16:54.619811  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (2.531304ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.719867  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (2.568201ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.819356  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.978803ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:54.918862  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.667179ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.022044  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (4.832739ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.122280  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (2.330353ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.222605  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (5.401364ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.320525  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (2.453683ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.422497  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (5.236291ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.519125  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (1.912323ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.587951  112476 cache.go:656] Couldn't expire cache for pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound. Binding is still in progress.
I1109 04:16:55.592023  112476 scheduler_binder.go:553] All PVCs for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound" are bound
I1109 04:16:55.592107  112476 factory.go:698] Attempting to bind pod-mix-bound to node-1
I1109 04:16:55.594770  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound/binding: (2.242843ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.596020  112476 scheduler.go:756] pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-mix-bound is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1109 04:16:55.598780  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.416705ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.619347  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-mix-bound: (2.109997ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.624257  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind-4: (4.295823ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.626479  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-canbind-2: (1.712414ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.628217  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-w-canbind-4: (1.311154ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.629906  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-canbind-2: (1.232397ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.647692  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (17.363245ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.655577  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2" deleted
I1109 04:16:55.655719  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 35952
I1109 04:16:55.655769  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind-2]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2 (uid: 0b0b8761-8a03-4cb5-99be-82cbbc992cc4)", boundByController: true
I1109 04:16:55.655786  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind-2]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2
I1109 04:16:55.659184  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-canbind-2: (3.096609ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39810]
I1109 04:16:55.659544  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-i-canbind-2]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2 not found
I1109 04:16:55.659576  112476 pv_controller.go:573] volume "pv-i-canbind-2" is released and reclaim policy "Retain" will be executed
I1109 04:16:55.659592  112476 pv_controller.go:775] updating PersistentVolume[pv-i-canbind-2]: set phase Released
I1109 04:16:55.661998  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (13.055928ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.662877  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4" deleted
I1109 04:16:55.666354  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind-2/status: (6.430218ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39810]
I1109 04:16:55.666661  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind-2" with version 36727
I1109 04:16:55.666698  112476 pv_controller.go:796] volume "pv-i-canbind-2" entered phase "Released"
I1109 04:16:55.666711  112476 pv_controller.go:1009] reclaimVolume[pv-i-canbind-2]: policy is Retain, nothing to do
I1109 04:16:55.666737  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36422
I1109 04:16:55.666761  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4 (uid: ed0f7fec-00b1-4cb9-98b3-16b279c4e1f1)", boundByController: true
I1109 04:16:55.666773  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4
I1109 04:16:55.669001  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-w-canbind-4: (1.959715ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39810]
I1109 04:16:55.669251  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4 not found
I1109 04:16:55.669282  112476 pv_controller.go:573] volume "pv-w-canbind-4" is released and reclaim policy "Retain" will be executed
I1109 04:16:55.669295  112476 pv_controller.go:775] updating PersistentVolume[pv-w-canbind-4]: set phase Released
I1109 04:16:55.673479  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-w-canbind-4/status: (3.895653ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39810]
I1109 04:16:55.673753  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36730
I1109 04:16:55.673791  112476 pv_controller.go:796] volume "pv-w-canbind-4" entered phase "Released"
I1109 04:16:55.673808  112476 pv_controller.go:1009] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I1109 04:16:55.673839  112476 pv_controller_base.go:216] volume "pv-i-canbind-2" deleted
I1109 04:16:55.673871  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-w-canbind-4" with version 36730
I1109 04:16:55.673903  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-w-canbind-4]: phase: Released, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4 (uid: ed0f7fec-00b1-4cb9-98b3-16b279c4e1f1)", boundByController: true
I1109 04:16:55.673916  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-w-canbind-4]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4
I1109 04:16:55.673942  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-w-canbind-4]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4 not found
I1109 04:16:55.673948  112476 pv_controller.go:1009] reclaimVolume[pv-w-canbind-4]: policy is Retain, nothing to do
I1109 04:16:55.673971  112476 pv_controller_base.go:403] deletion of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind-2" was already processed
I1109 04:16:55.677616  112476 httplog.go:90] DELETE /api/v1/persistentvolumes: (15.009951ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.677868  112476 pv_controller_base.go:216] volume "pv-w-canbind-4" deleted
I1109 04:16:55.678007  112476 pv_controller_base.go:403] deletion of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-w-canbind-4" was already processed
I1109 04:16:55.688122  112476 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (9.638748ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
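The httplog.go entries that dominate this log all share one shape: method, path, latency in parentheses, HTTP status, then user agent and peer address. When digging through a failed run like this one, it can help to pull those fields out programmatically. A small parser sketch, assuming only the format visible in this log (field names and the helper are hypothetical, not part of the test harness):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// httplogRe matches entries of the form seen above, e.g.
//   GET /api/v1/.../pods/pod-mix-bound: (1.485515ms) 200 [agent 127.0.0.1:39552]
var httplogRe = regexp.MustCompile(`(GET|PUT|POST|DELETE) (\S+): \(([0-9.]+)ms\) (\d+)`)

// httpEntry holds the fields recovered from one httplog line.
type httpEntry struct {
	Method string
	Path   string
	Millis float64
	Status int
}

// parseHTTPLog extracts the request fields from a log line, reporting
// false when the line is not an httplog entry.
func parseHTTPLog(line string) (httpEntry, bool) {
	m := httplogRe.FindStringSubmatch(line)
	if m == nil {
		return httpEntry{}, false
	}
	ms, _ := strconv.ParseFloat(m[3], 64)
	status, _ := strconv.Atoi(m[4])
	return httpEntry{Method: m[1], Path: m[2], Millis: ms, Status: status}, true
}

func main() {
	line := `I1109 04:16:54.218709  112476 httplog.go:90] GET /api/v1/namespaces/ns/pods/pod-mix-bound: (1.485515ms) 200 [test 127.0.0.1:39552]`
	e, ok := parseHTTPLog(line)
	fmt.Println(ok, e.Method, e.Status)
}
```

Filtering on `Status >= 400` quickly surfaces the two expected 404s (the already-deleted PVC lookups) versus anything genuinely anomalous.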
I1109 04:16:55.688445  112476 volume_binding_test.go:191] Running test immediate can bind
I1109 04:16:55.691557  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (2.804953ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.696022  112476 httplog.go:90] POST /apis/storage.k8s.io/v1/storageclasses: (3.494591ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.698589  112476 pv_controller_base.go:509] storeObjectUpdate: adding volume "pv-i-canbind", version 36743
I1109 04:16:55.698634  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Pending, bound to: "", boundByController: false
I1109 04:16:55.698657  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1109 04:16:55.698668  112476 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Available
I1109 04:16:55.699107  112476 httplog.go:90] POST /api/v1/persistentvolumes: (2.557802ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.701547  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (1.990726ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.701794  112476 pv_controller_base.go:509] storeObjectUpdate: adding claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind", version 36745
I1109 04:16:55.701827  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:16:55.701857  112476 pv_controller.go:301] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind]: no volume found
I1109 04:16:55.701866  112476 pv_controller.go:1324] provisionClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind]: started
E1109 04:16:55.701896  112476 pv_controller.go:1329] error finding provisioning plugin for claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind: no volume plugin matched
I1109 04:16:55.701966  112476 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766", Name:"pvc-i-canbind", UID:"b81258ef-09ba-4e47-b0ef-161e75701690", APIVersion:"v1", ResourceVersion:"36745", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched
I1109 04:16:55.702910  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (3.993144ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39810]
I1109 04:16:55.704765  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 36744
I1109 04:16:55.704793  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (2.493725ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.704808  112476 pv_controller.go:796] volume "pv-i-canbind" entered phase "Available"
I1109 04:16:55.704835  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 36744
I1109 04:16:55.704851  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "", boundByController: false
I1109 04:16:55.704871  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1109 04:16:55.704878  112476 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Available
I1109 04:16:55.704887  112476 pv_controller.go:778] updating PersistentVolume[pv-i-canbind]: phase Available already set
I1109 04:16:55.706544  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (3.434673ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40632]
I1109 04:16:55.706960  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind
I1109 04:16:55.706978  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind
E1109 04:16:55.707139  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
E1109 04:16:55.707167  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:16:55.707186  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
I1109 04:16:55.710041  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.162613ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39810]
I1109 04:16:55.710785  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind/status: (3.348676ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
E1109 04:16:55.711363  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
I1109 04:16:55.711795  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (3.854998ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40634]
I1109 04:16:55.811070  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (3.183808ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:55.910103  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.85002ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:56.009285  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.941471ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:56.109982  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.442341ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:56.209788  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.366242ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:56.309269  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.803608ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:56.409505  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.023287ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:56.509251  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.893075ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:56.609219  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.888374ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:56.709228  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.843764ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:56.809441  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.894838ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:56.909432  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.99966ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:57.009230  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.871308ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:57.109197  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.850254ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:57.213508  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (6.046315ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:57.310511  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (3.100048ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:57.408866  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.507125ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:57.509158  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.778743ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:57.608958  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.664035ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:57.709031  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.659153ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:57.808886  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.555919ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:57.909214  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.786332ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:58.009394  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.89277ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:58.109261  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.867652ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:58.210494  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (3.153661ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:58.313235  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (5.753159ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:58.410097  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.685614ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:58.510831  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (3.40908ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:58.609393  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.006444ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:58.709299  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.921415ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:58.809209  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.820903ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:58.909147  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.826293ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:59.009517  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.160534ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:59.109193  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.871183ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:59.212138  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (4.622822ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:59.309166  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.703067ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:59.409851  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.351069ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:59.511171  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (3.608757ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:59.609521  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.060158ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:59.709381  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.912712ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:59.810921  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (3.464773ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:16:59.909015  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.569547ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.009039  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.716467ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.109289  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.917828ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.209579  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.056301ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.309167  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.820656ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.325022  112476 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.543061ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.330299  112476 httplog.go:90] GET /api/v1/namespaces/kube-public: (4.747445ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.333491  112476 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (2.478109ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.409261  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.910313ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.509039  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.627433ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.609083  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.749606ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.709726  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.126861ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.809818  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.414955ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:00.908982  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.552312ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:01.009666  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.257645ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:01.109969  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.587655ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:01.215475  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (8.084853ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:01.309667  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.196872ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:01.409799  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.235447ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:01.509600  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.129786ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:01.609611  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.249686ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:01.709655  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.275761ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:01.809561  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.043869ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:01.909813  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.211264ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:02.009693  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.123331ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:02.109615  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.229622ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:02.209180  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.711919ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:02.309073  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.708408ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:02.409761  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.301587ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:02.510561  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (3.063861ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:02.609491  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.170394ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:02.708934  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.649981ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:02.809126  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.762805ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:02.910008  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.677102ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.009543  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.153723ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.109033  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.747714ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.209118  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.802152ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.309194  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.822301ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.408895  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.564891ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.509063  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.703322ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.512750  112476 httplog.go:90] GET /api/v1/namespaces/default: (2.001418ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.516834  112476 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (3.670821ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.518889  112476 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.606187ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.609995  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.295172ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.709489  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.076822ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.784946  112476 pv_controller_base.go:426] resyncing PV controller
I1109 04:17:03.785070  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 36744
I1109 04:17:03.785103  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind" with version 36745
I1109 04:17:03.785115  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "", boundByController: false
I1109 04:17:03.785137  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:17:03.785147  112476 pv_controller.go:492] synchronizing PersistentVolume[pv-i-canbind]: volume is unused
I1109 04:17:03.785157  112476 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Available
I1109 04:17:03.785166  112476 pv_controller.go:778] updating PersistentVolume[pv-i-canbind]: phase Available already set
I1109 04:17:03.785174  112476 pv_controller.go:326] synchronizing unbound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Available, bound to: "", boundByController: false
I1109 04:17:03.785184  112476 pv_controller.go:929] binding volume "pv-i-canbind" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind"
I1109 04:17:03.785191  112476 pv_controller.go:827] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind"
I1109 04:17:03.785222  112476 pv_controller.go:847] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind" bound to volume "pv-i-canbind"
I1109 04:17:03.789054  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind: (3.405882ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.789497  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38153
I1109 04:17:03.789534  112476 pv_controller.go:860] updating PersistentVolume[pv-i-canbind]: bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind"
I1109 04:17:03.789547  112476 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Bound
I1109 04:17:03.789897  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind
I1109 04:17:03.789928  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind
I1109 04:17:03.789897  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38153
I1109 04:17:03.790051  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Available, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind (uid: b81258ef-09ba-4e47-b0ef-161e75701690)", boundByController: true
I1109 04:17:03.790061  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind
I1109 04:17:03.790082  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:17:03.790097  112476 pv_controller.go:601] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
E1109 04:17:03.790135  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
E1109 04:17:03.790154  112476 framework.go:350] error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
E1109 04:17:03.790190  112476 factory.go:648] Error scheduling volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims; retrying
I1109 04:17:03.790218  112476 scheduler.go:774] Updating pod condition for volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind to (PodScheduled==False, Reason=Unschedulable)
E1109 04:17:03.790241  112476 scheduler.go:643] error selecting node for pod: error while running "VolumeBinding" filter plugin for pod "pod-i-canbind": pod has unbound immediate PersistentVolumeClaims
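The three `E1109` lines above show the scheduler's "VolumeBinding" filter plugin rejecting `pod-i-canbind` because its claim is still `Pending`, then marking the pod `Unschedulable` and retrying. As a rough illustration of that filter condition (a simplified stand-in, not the real plugin code — `pvc`, `Phase`, and `hasUnboundImmediateClaims` are hypothetical names):

```go
package main

import "fmt"

// Phase mirrors the PVC phases visible in the log (Pending, Bound).
type Phase string

const (
	Pending Phase = "Pending"
	Bound   Phase = "Bound"
)

// pvc is a minimal stand-in for a PersistentVolumeClaim with an
// immediate (non-WaitForFirstConsumer) binding mode.
type pvc struct {
	name  string
	phase Phase
}

// hasUnboundImmediateClaims reports whether any claim is not yet Bound,
// which is the condition behind the "pod has unbound immediate
// PersistentVolumeClaims" filter error in the log above.
func hasUnboundImmediateClaims(claims []pvc) bool {
	for _, c := range claims {
		if c.phase != Bound {
			return true
		}
	}
	return false
}

func main() {
	claims := []pvc{{name: "pvc-i-canbind", phase: Pending}}
	// While the claim is Pending, the filter fails and the pod is retried.
	fmt.Println(hasUnboundImmediateClaims(claims))
	// Once the PV controller binds the claim, the filter passes.
	claims[0].phase = Bound
	fmt.Println(hasUnboundImmediateClaims(claims))
}
```

This matches the ordering seen in the log: the filter error fires while `pvc-i-canbind` is still `Pending`, and the retry succeeds only after the PV controller moves the claim to `Bound`.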
I1109 04:17:03.792271  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (2.484872ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.792346  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38154
I1109 04:17:03.792381  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind (uid: b81258ef-09ba-4e47-b0ef-161e75701690)", boundByController: true
I1109 04:17:03.792502  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind
I1109 04:17:03.792638  112476 pv_controller.go:553] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind found: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I1109 04:17:03.792697  112476 pv_controller.go:601] synchronizing PersistentVolume[pv-i-canbind]: volume not bound yet, waiting for syncClaim to fix it
I1109 04:17:03.792900  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38154
I1109 04:17:03.792932  112476 pv_controller.go:796] volume "pv-i-canbind" entered phase "Bound"
I1109 04:17:03.792948  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind]: binding to "pv-i-canbind"
I1109 04:17:03.792966  112476 pv_controller.go:899] volume "pv-i-canbind" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind"
I1109 04:17:03.793686  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (3.247843ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39810]
I1109 04:17:03.794665  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (3.894879ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:03.797070  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-canbind: (2.273566ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39552]
I1109 04:17:03.798104  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind" with version 38156
I1109 04:17:03.798144  112476 pv_controller.go:910] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind]: bound to "pv-i-canbind"
I1109 04:17:03.798156  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind] status: set phase Bound
I1109 04:17:03.800583  112476 httplog.go:90] PUT /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-canbind/status: (2.186228ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:03.800831  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind" with version 38157
I1109 04:17:03.800866  112476 pv_controller.go:740] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind" entered phase "Bound"
I1109 04:17:03.800886  112476 pv_controller.go:955] volume "pv-i-canbind" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind"
I1109 04:17:03.800919  112476 pv_controller.go:956] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind (uid: b81258ef-09ba-4e47-b0ef-161e75701690)", boundByController: true
I1109 04:17:03.800936  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1109 04:17:03.800974  112476 pv_controller_base.go:537] storeObjectUpdate updating claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind" with version 38157
I1109 04:17:03.800989  112476 pv_controller.go:239] synchronizing PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind]: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1109 04:17:03.801008  112476 pv_controller.go:447] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind]: volume "pv-i-canbind" found: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind (uid: b81258ef-09ba-4e47-b0ef-161e75701690)", boundByController: true
I1109 04:17:03.801019  112476 pv_controller.go:464] synchronizing bound PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind]: claim is already correctly bound
I1109 04:17:03.801032  112476 pv_controller.go:929] binding volume "pv-i-canbind" to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind"
I1109 04:17:03.801045  112476 pv_controller.go:827] updating PersistentVolume[pv-i-canbind]: binding to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind"
I1109 04:17:03.801065  112476 pv_controller.go:839] updating PersistentVolume[pv-i-canbind]: already bound to "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind"
I1109 04:17:03.801079  112476 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Bound
I1109 04:17:03.801090  112476 pv_controller.go:778] updating PersistentVolume[pv-i-canbind]: phase Bound already set
I1109 04:17:03.801101  112476 pv_controller.go:867] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind]: binding to "pv-i-canbind"
I1109 04:17:03.801119  112476 pv_controller.go:914] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind]: already bound to "pv-i-canbind"
I1109 04:17:03.801129  112476 pv_controller.go:681] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind] status: set phase Bound
I1109 04:17:03.801152  112476 pv_controller.go:726] updating PersistentVolumeClaim[volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind] status: phase Bound already set
I1109 04:17:03.801170  112476 pv_controller.go:955] volume "pv-i-canbind" bound to claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind"
I1109 04:17:03.801187  112476 pv_controller.go:956] volume "pv-i-canbind" status after binding: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind (uid: b81258ef-09ba-4e47-b0ef-161e75701690)", boundByController: true
I1109 04:17:03.801202  112476 pv_controller.go:957] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind" status after binding: phase: Bound, bound to: "pv-i-canbind", bindCompleted: true, boundByController: true
I1109 04:17:03.811073  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (3.785634ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:03.909402  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.990084ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:04.009669  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.254037ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:04.109686  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.1611ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:04.209313  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.971359ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:04.309331  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.968554ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:04.409284  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.923507ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:04.509318  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.862773ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:04.609255  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.797306ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:04.709172  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.789153ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:04.809310  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.910825ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:04.908947  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.626424ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:05.009546  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.162514ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:05.109433  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.032823ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:05.209120  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.715214ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:05.311025  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (3.643873ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:05.410680  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (3.289352ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:05.511580  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (4.207943ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:05.613721  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (6.257934ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:05.709970  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.506866ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:05.809622  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.234192ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:05.909122  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.738491ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.009381  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.063111ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.109555  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.152683ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.209582  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (2.055982ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.309266  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.879717ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.410739  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (3.349505ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.509338  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (1.924151ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.589972  112476 scheduling_queue.go:841] About to try and schedule pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind
I1109 04:17:06.590012  112476 scheduler.go:611] Attempting to schedule pod: volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind
I1109 04:17:06.590282  112476 scheduler_binder.go:653] PersistentVolume "pv-i-canbind", Node "node-2" mismatch for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind": No matching NodeSelectorTerms
I1109 04:17:06.590283  112476 scheduler_binder.go:659] All bound volumes for Pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind" match with Node "node-1"
I1109 04:17:06.590434  112476 scheduler_binder.go:257] AssumePodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind", node "node-1"
I1109 04:17:06.590462  112476 scheduler_binder.go:267] AssumePodVolumes for pod "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind", node "node-1": all PVCs bound and nothing to do
I1109 04:17:06.590548  112476 factory.go:698] Attempting to bind pod-i-canbind to node-1
I1109 04:17:06.593362  112476 httplog.go:90] POST /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind/binding: (2.414899ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.593713  112476 scheduler.go:756] pod volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pod-i-canbind is bound successfully on node "node-1", 2 nodes evaluated, 1 nodes were found feasible.
I1109 04:17:06.598572  112476 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/events: (4.381195ms) 201 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.612493  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods/pod-i-canbind: (4.557058ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.615464  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-canbind: (1.683806ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.618013  112476 httplog.go:90] GET /api/v1/persistentvolumes/pv-i-canbind: (2.051727ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.628868  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (10.31405ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.635519  112476 pv_controller_base.go:265] claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind" deleted
I1109 04:17:06.635578  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38154
I1109 04:17:06.635616  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Bound, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind (uid: b81258ef-09ba-4e47-b0ef-161e75701690)", boundByController: true
I1109 04:17:06.635632  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind
I1109 04:17:06.636166  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (6.772571ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.639729  112476 httplog.go:90] GET /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims/pvc-i-canbind: (3.782837ms) 404 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39810]
I1109 04:17:06.640073  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind not found
I1109 04:17:06.640103  112476 pv_controller.go:573] volume "pv-i-canbind" is released and reclaim policy "Retain" will be executed
I1109 04:17:06.640118  112476 pv_controller.go:775] updating PersistentVolume[pv-i-canbind]: set phase Released
I1109 04:17:06.642652  112476 httplog.go:90] PUT /api/v1/persistentvolumes/pv-i-canbind/status: (2.143487ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:39810]
I1109 04:17:06.642866  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38843
I1109 04:17:06.642887  112476 pv_controller.go:796] volume "pv-i-canbind" entered phase "Released"
I1109 04:17:06.642898  112476 pv_controller.go:1009] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I1109 04:17:06.643712  112476 pv_controller_base.go:537] storeObjectUpdate updating volume "pv-i-canbind" with version 38843
I1109 04:17:06.643762  112476 pv_controller.go:487] synchronizing PersistentVolume[pv-i-canbind]: phase: Released, bound to: "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind (uid: b81258ef-09ba-4e47-b0ef-161e75701690)", boundByController: true
I1109 04:17:06.643775  112476 pv_controller.go:512] synchronizing PersistentVolume[pv-i-canbind]: volume is bound to claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind
I1109 04:17:06.643795  112476 pv_controller.go:545] synchronizing PersistentVolume[pv-i-canbind]: claim volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind not found
I1109 04:17:06.643803  112476 pv_controller.go:1009] reclaimVolume[pv-i-canbind]: policy is Retain, nothing to do
I1109 04:17:06.644600  112476 store.go:231] deletion of /87c6aefe-e175-476b-9a34-2f22dccf8ed1/persistentvolumes/pv-i-canbind failed because of a conflict, going to retry
I1109 04:17:06.646306  112476 pv_controller_base.go:216] volume "pv-i-canbind" deleted
I1109 04:17:06.646353  112476 pv_controller_base.go:403] deletion of claim "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pvc-i-canbind" was already processed
I1109 04:17:06.646725  112476 httplog.go:90] DELETE /api/v1/persistentvolumes: (9.476028ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.656788  112476 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (9.580316ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.657575  112476 volume_binding_test.go:920] test cluster "volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766" start to tear down
I1109 04:17:06.659970  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/pods: (2.137423ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.664041  112476 httplog.go:90] DELETE /api/v1/namespaces/volume-scheduling-8e7149e3-72ce-4a97-a910-4972af698766/persistentvolumeclaims: (3.616834ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.673206  112476 httplog.go:90] DELETE /api/v1/persistentvolumes: (8.165824ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.677710  112476 httplog.go:90] DELETE /apis/storage.k8s.io/v1/storageclasses: (2.859196ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.678595  112476 pv_controller_base.go:305] Shutting down persistent volume controller
I1109 04:17:06.678616  112476 pv_controller_base.go:416] claim worker queue shutting down
I1109 04:17:06.678717  112476 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30887&timeout=9m32s&timeoutSeconds=572&watch=true: (1m3.081113579s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55486]
I1109 04:17:06.678817  112476 pv_controller_base.go:359] volume worker queue shutting down
I1109 04:17:06.678947  112476 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=31360&timeout=9m16s&timeoutSeconds=556&watch=true: (1m3.087818249s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55472]
I1109 04:17:06.679047  112476 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30887&timeout=7m28s&timeoutSeconds=448&watch=true: (1m3.092272312s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54966]
I1109 04:17:06.679193  112476 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30885&timeout=7m29s&timeoutSeconds=449&watch=true: (1m3.073774593s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55490]
I1109 04:17:06.679220  112476 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=30884&timeout=9m52s&timeoutSeconds=592&watch=true: (1m3.073311998s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55496]
I1109 04:17:06.679253  112476 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30884&timeout=6m22s&timeoutSeconds=382&watch=true: (1m3.084298397s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54980]
I1109 04:17:06.679266  112476 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30884&timeout=7m45s&timeoutSeconds=465&watch=true: (1m2.99664943s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55528]
I1109 04:17:06.679375  112476 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30889&timeout=5m13s&timeoutSeconds=313&watch=true: (1m3.070301219s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55500]
I1109 04:17:06.679443  112476 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=30884&timeout=7m32s&timeoutSeconds=452&watch=true: (1m2.995537251s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55534]
I1109 04:17:06.679512  112476 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30888&timeout=7m15s&timeoutSeconds=435&watch=true: (1m3.072641613s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55498]
I1109 04:17:06.679574  112476 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30886&timeout=9m43s&timeoutSeconds=583&watch=true: (1m3.080650027s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55488]
I1109 04:17:06.679625  112476 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30884&timeout=7m24s&timeoutSeconds=444&watch=true: (1m2.997194045s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55522]
I1109 04:17:06.679684  112476 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30884&timeout=6m26s&timeoutSeconds=386&watch=true: (1m3.074321714s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55492]
I1109 04:17:06.679716  112476 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30887&timeout=9m45s&timeoutSeconds=585&watch=true: (1m2.997024127s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55520]
I1109 04:17:06.679838  112476 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30884&timeout=8m42s&timeoutSeconds=522&watch=true: (1m2.995274932s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55530]
I1109 04:17:06.679874  112476 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30884&timeout=7m2s&timeoutSeconds=422&watch=true: (1m3.084036724s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55482]
I1109 04:17:06.695630  112476 httplog.go:90] DELETE /api/v1/nodes: (17.464464ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.695914  112476 controller.go:180] Shutting down kubernetes service endpoint reconciler
I1109 04:17:06.697551  112476 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.337028ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.700457  112476 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.210778ms) 200 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43028]
I1109 04:17:06.700909  112476 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I1109 04:17:06.701121  112476 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=30884&timeout=5m47s&timeoutSeconds=347&watch=true: (1m6.430456005s) 0 [volumescheduling.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54968]
--- FAIL: TestVolumeBinding (66.70s)
    volume_binding_test.go:243: Failed to schedule Pod "pod-w-pvc-prebound": timed out waiting for the condition

				from junit_99844db6e586a0ff1ded59c41b65ce7fe8e8a77e_20191109-040846.xml

