Result: FAILURE
Tests: 2 failed / 2861 succeeded
Started: 2019-09-11 15:46
Elapsed: 28m15s
Revision:
Builder: gke-prow-ssd-pool-1a225945-z2ft
Refs: master:2b7ceb21
82053:07ba65df
82060:45d6f088
82064:6c46135f
82113:8dc401d1
82121:aa20910e
82161:4558dd40
82170:6392b69a
82175:9828f986
82187:7d4bb382
82193:f1b314bf
82209:89a70fa4
82210:270ddcea
82222:d48e47a9
82224:6b961eb0
82233:6d6b0be3
resultstore: https://source.cloud.google.com/results/invocations/7e4a6568-86dd-4b01-b8a0-f05a5a4795d7/targets/test
pod: 44257493-d4ab-11e9-affb-2e2498aeb6d1
infra-commit: 943aad7c5
repo: k8s.io/kubernetes
repo-commit: 9d0a41677acc10ab6e9ae534b828e5476e1a87fd
repos: {'k8s.io/kubernetes': 'master:2b7ceb215a206f2047ed3199dc2bff4e306581ac,82053:07ba65df6d69cf951063f08e52eafe218be21481,82060:45d6f08868ce2729182aae5734a00e5e27ae08f9,82064:6c46135ff5647b97aa6e38023b9e749a448d6536,82113:8dc401d14161e757540ab46e40ed443d0300cdb5,82121:aa20910e242f5b708d337a6fbebda5d9e35b88b5,82161:4558dd407ac8e692dc41c47268e15ff692949ff2,82170:6392b69a1d010f1c1453fc1a3b346e3ff2d708b2,82175:9828f986afd4db79a10c78bee1cc2e449faee3a6,82187:7d4bb382474893c5c3ed6e375b5ff4a99862eca0,82193:f1b314bf5a678e061fa748bc8c9497f40fad0784,82209:89a70fa407b10329e5e71de35d94616e8d444b2d,82210:270ddcea236d99cc8098c216199112913e1c11d4,82222:d48e47a95efecb9cdedce0877d8f7519c489775c,82224:6b961eb08cbcd06746835132767af4fa29fe5e39,82233:6d6b0be36bfcd8faa2d4033d34feb71c341286cc'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestBindPlugin 1m5s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestBindPlugin$
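The failing case can be rerun in isolation. A minimal reproduction sketch, assuming a local k8s.io/kubernetes checkout with etcd available (the scheduler integration tests start their own apiserver against a local etcd); the helper script path and make variables below are taken from the Kubernetes build tooling and should be verified against your checkout:

```shell
# From the root of a k8s.io/kubernetes checkout.
# Integration tests need etcd on PATH; the repo ships a helper to fetch it.
./hack/install-etcd.sh
export PATH="$(pwd)/third_party/etcd:${PATH}"

# Re-run only the failing test, mirroring the go test command from the log.
make test-integration WHAT=./test/integration/scheduler \
    KUBE_TEST_ARGS="-run TestBindPlugin$ -v"
```

The `-run TestBindPlugin$` anchor matches only this test, so subtests still run but sibling tests in the package are skipped.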
=== RUN   TestBindPlugin
W0911 16:09:28.690683  108813 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0911 16:09:28.690709  108813 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0911 16:09:28.690724  108813 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0911 16:09:28.690735  108813 master.go:259] Using reconciler: 
I0911 16:09:28.692735  108813 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.692903  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.693008  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.694036  108813 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0911 16:09:28.694090  108813 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.694116  108813 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0911 16:09:28.694409  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.694435  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.695402  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.696216  108813 store.go:1342] Monitoring events count at <storage-prefix>//events
I0911 16:09:28.696257  108813 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.696270  108813 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0911 16:09:28.696404  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.696422  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.697515  108813 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0911 16:09:28.697541  108813 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0911 16:09:28.697559  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.697549  108813 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.697718  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.697730  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.698918  108813 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0911 16:09:28.699023  108813 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0911 16:09:28.699168  108813 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.699315  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.699337  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.699900  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.700098  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.701104  108813 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0911 16:09:28.701349  108813 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0911 16:09:28.701352  108813 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.701523  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.701548  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.702373  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.702820  108813 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0911 16:09:28.702934  108813 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0911 16:09:28.703004  108813 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.703149  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.703172  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.704147  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.704439  108813 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0911 16:09:28.704478  108813 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0911 16:09:28.704659  108813 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.704787  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.704809  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.705616  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.705880  108813 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0911 16:09:28.706077  108813 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.706159  108813 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0911 16:09:28.706195  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.706210  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.707025  108813 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0911 16:09:28.707194  108813 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.707404  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.707439  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.707525  108813 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0911 16:09:28.707539  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.708994  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.709014  108813 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0911 16:09:28.709934  108813 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.709036  108813 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0911 16:09:28.710744  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.712071  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.711916  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.713495  108813 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0911 16:09:28.713988  108813 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0911 16:09:28.716374  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.718498  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.718820  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.720406  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.721391  108813 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0911 16:09:28.721567  108813 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0911 16:09:28.721611  108813 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.721808  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.721829  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.722904  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.723700  108813 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0911 16:09:28.723802  108813 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0911 16:09:28.723924  108813 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.724269  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.724290  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.725771  108813 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0911 16:09:28.725796  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.725802  108813 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.725942  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.725959  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.726066  108813 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0911 16:09:28.727464  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.727483  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.727506  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.728388  108813 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.728551  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.728570  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.729456  108813 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0911 16:09:28.729477  108813 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0911 16:09:28.729895  108813 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0911 16:09:28.729956  108813 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.730161  108813 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.731113  108813 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.732258  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.732642  108813 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.733886  108813 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.734710  108813 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.735143  108813 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.735253  108813 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.735487  108813 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.736068  108813 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.738002  108813 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.738458  108813 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.739483  108813 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.740025  108813 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.740860  108813 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.741262  108813 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.742151  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.742538  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.742903  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.743378  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.743795  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.744103  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.744521  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.745798  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.746323  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.747823  108813 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.748823  108813 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.749343  108813 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.749779  108813 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.750746  108813 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.751196  108813 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.752205  108813 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.753162  108813 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.753909  108813 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.754895  108813 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.755347  108813 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.755558  108813 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0911 16:09:28.755657  108813 master.go:461] Enabling API group "authentication.k8s.io".
I0911 16:09:28.755728  108813 master.go:461] Enabling API group "authorization.k8s.io".
I0911 16:09:28.755955  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.756233  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.756363  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.757454  108813 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 16:09:28.757706  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.757922  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.758009  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.757507  108813 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 16:09:28.759342  108813 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 16:09:28.759472  108813 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 16:09:28.759575  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.759811  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.759834  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.759977  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.760685  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.760783  108813 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 16:09:28.760803  108813 master.go:461] Enabling API group "autoscaling".
I0911 16:09:28.760825  108813 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 16:09:28.760998  108813 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.761155  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.761181  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.762201  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.762476  108813 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0911 16:09:28.762615  108813 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0911 16:09:28.762727  108813 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.762873  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.762893  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.763354  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.763548  108813 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0911 16:09:28.763589  108813 master.go:461] Enabling API group "batch".
I0911 16:09:28.763625  108813 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0911 16:09:28.774526  108813 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.781808  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.783972  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.784059  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.787602  108813 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0911 16:09:28.787920  108813 master.go:461] Enabling API group "certificates.k8s.io".
I0911 16:09:28.788953  108813 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.789050  108813 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0911 16:09:28.791968  108813 watch_cache.go:405] Replace watchCache (rev: 28088) 
I0911 16:09:28.803566  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.803618  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.807047  108813 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0911 16:09:28.809544  108813 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.807475  108813 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0911 16:09:28.816734  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.817133  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.821475  108813 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0911 16:09:28.825011  108813 master.go:461] Enabling API group "coordination.k8s.io".
I0911 16:09:28.822204  108813 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0911 16:09:28.822447  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.825179  108813 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0911 16:09:28.826152  108813 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.826549  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.826697  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.826843  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.841174  108813 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0911 16:09:28.841236  108813 master.go:461] Enabling API group "extensions".
I0911 16:09:28.841580  108813 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.841800  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.841842  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.842040  108813 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0911 16:09:28.843984  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.844031  108813 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0911 16:09:28.844070  108813 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0911 16:09:28.844815  108813 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.846909  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.847205  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.847244  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.848459  108813 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0911 16:09:28.848632  108813 master.go:461] Enabling API group "networking.k8s.io".
I0911 16:09:28.848521  108813 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0911 16:09:28.849044  108813 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.849320  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.849427  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.850469  108813 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0911 16:09:28.850707  108813 master.go:461] Enabling API group "node.k8s.io".
I0911 16:09:28.850663  108813 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0911 16:09:28.850952  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.851362  108813 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.851965  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.856787  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.852603  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.858934  108813 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0911 16:09:28.858965  108813 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0911 16:09:28.860082  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.860677  108813 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.861235  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.861413  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.862638  108813 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0911 16:09:28.862768  108813 master.go:461] Enabling API group "policy".
I0911 16:09:28.862884  108813 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.863227  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.863377  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.863558  108813 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0911 16:09:28.864594  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.864835  108813 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0911 16:09:28.864981  108813 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0911 16:09:28.865029  108813 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.865486  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.865635  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.866888  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.867832  108813 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0911 16:09:28.867865  108813 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0911 16:09:28.868322  108813 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.869210  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.869661  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.869808  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.870539  108813 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0911 16:09:28.870652  108813 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0911 16:09:28.870780  108813 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.871084  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.871137  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.871709  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.872106  108813 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0911 16:09:28.872257  108813 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.872399  108813 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0911 16:09:28.872693  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.873164  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.873514  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.874153  108813 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0911 16:09:28.874246  108813 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0911 16:09:28.874363  108813 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.874575  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.874609  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.875029  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.875936  108813 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0911 16:09:28.875965  108813 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.876110  108813 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0911 16:09:28.876126  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.876251  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.877243  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.877290  108813 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0911 16:09:28.877337  108813 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0911 16:09:28.877585  108813 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.878075  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.878125  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.879132  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.879148  108813 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0911 16:09:28.879177  108813 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0911 16:09:28.879205  108813 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0911 16:09:28.880206  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.882239  108813 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.882478  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.882512  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.883235  108813 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0911 16:09:28.883358  108813 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0911 16:09:28.883425  108813 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.883640  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.883662  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.884725  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.885048  108813 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0911 16:09:28.885070  108813 master.go:461] Enabling API group "scheduling.k8s.io".
I0911 16:09:28.885194  108813 master.go:450] Skipping disabled API group "settings.k8s.io".
I0911 16:09:28.885341  108813 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0911 16:09:28.885405  108813 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.885622  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.885646  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.886078  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.886231  108813 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0911 16:09:28.886337  108813 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0911 16:09:28.886397  108813 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.886597  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.886617  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.887231  108813 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0911 16:09:28.887398  108813 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.887711  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.887823  108813 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0911 16:09:28.887835  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.888706  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.889517  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.889675  108813 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0911 16:09:28.889710  108813 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.889917  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.889935  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.889950  108813 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0911 16:09:28.890789  108813 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0911 16:09:28.891122  108813 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.891353  108813 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0911 16:09:28.891477  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.892415  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.892438  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.893044  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.893472  108813 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0911 16:09:28.893571  108813 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0911 16:09:28.893632  108813 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.893828  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.893846  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.895596  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.896475  108813 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0911 16:09:28.896504  108813 master.go:461] Enabling API group "storage.k8s.io".
I0911 16:09:28.896531  108813 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0911 16:09:28.896670  108813 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.896956  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.896980  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.897370  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.898560  108813 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0911 16:09:28.898843  108813 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.899068  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.899169  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.899369  108813 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0911 16:09:28.900523  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.900628  108813 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0911 16:09:28.900789  108813 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.900816  108813 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0911 16:09:28.900921  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.900939  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.902328  108813 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0911 16:09:28.902451  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.902483  108813 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.902615  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.902636  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.902713  108813 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0911 16:09:28.904038  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.905281  108813 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0911 16:09:28.905618  108813 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.905751  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.905774  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.905852  108813 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0911 16:09:28.906959  108813 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0911 16:09:28.907079  108813 master.go:461] Enabling API group "apps".
I0911 16:09:28.907192  108813 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.907110  108813 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0911 16:09:28.908289  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.908462  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.908469  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.909704  108813 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0911 16:09:28.909739  108813 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.909878  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.909898  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.909926  108813 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0911 16:09:28.910583  108813 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0911 16:09:28.910622  108813 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.910718  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.910734  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.910796  108813 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0911 16:09:28.910973  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.912149  108813 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0911 16:09:28.912176  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.912203  108813 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.912228  108813 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0911 16:09:28.912343  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.912362  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.912951  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.913675  108813 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0911 16:09:28.913841  108813 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0911 16:09:28.913882  108813 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0911 16:09:28.913881  108813 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.914288  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:28.914356  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:28.914949  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.915107  108813 store.go:1342] Monitoring events count at <storage-prefix>//events
I0911 16:09:28.915132  108813 master.go:461] Enabling API group "events.k8s.io".
I0911 16:09:28.915164  108813 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0911 16:09:28.915223  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.916938  108813 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.917175  108813 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.917471  108813 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.917580  108813 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.917685  108813 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.917775  108813 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.917949  108813 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.918050  108813 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.918135  108813 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.918224  108813 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.919674  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.919990  108813 watch_cache.go:405] Replace watchCache (rev: 28089) 
I0911 16:09:28.920344  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.921235  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.921676  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.922686  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.923015  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.923753  108813 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.924065  108813 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.924792  108813 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.925103  108813 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 16:09:28.925162  108813 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0911 16:09:28.925770  108813 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.925911  108813 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.926104  108813 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.926816  108813 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.927615  108813 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.928499  108813 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.928855  108813 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.929739  108813 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.930542  108813 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.930908  108813 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.931706  108813 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 16:09:28.931870  108813 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0911 16:09:28.932752  108813 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.933171  108813 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.933801  108813 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.934381  108813 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.934790  108813 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.935432  108813 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.935929  108813 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.936442  108813 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.936860  108813 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.937425  108813 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.937970  108813 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 16:09:28.938108  108813 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0911 16:09:28.938599  108813 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.939076  108813 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 16:09:28.939205  108813 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0911 16:09:28.939696  108813 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.940271  108813 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.940573  108813 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.941023  108813 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.941613  108813 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.942058  108813 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.942608  108813 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 16:09:28.942812  108813 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0911 16:09:28.943739  108813 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.944565  108813 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.944910  108813 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.945652  108813 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.945980  108813 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.946285  108813 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.947010  108813 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.947389  108813 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.947741  108813 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.948607  108813 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.948953  108813 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.949308  108813 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 16:09:28.949467  108813 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0911 16:09:28.949546  108813 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0911 16:09:28.950228  108813 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.950968  108813 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.951696  108813 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.952351  108813 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.953263  108813 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3cac0a56-5246-472b-a64f-147c9689bc84", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:09:28.956655  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:28.956689  108813 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0911 16:09:28.956700  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:28.956711  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:28.956720  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:28.956728  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:28.956757  108813 httplog.go:90] GET /healthz: (220.669µs) 0 [Go-http-client/1.1 127.0.0.1:45010]
I0911 16:09:28.958154  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.413749ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45012]
I0911 16:09:28.961769  108813 httplog.go:90] GET /api/v1/services: (1.181291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45012]
I0911 16:09:28.966235  108813 httplog.go:90] GET /api/v1/services: (1.655241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45012]
I0911 16:09:28.969998  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.121041ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45012]
I0911 16:09:28.971712  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:28.972752  108813 httplog.go:90] GET /api/v1/services: (1.973649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45012]
I0911 16:09:28.972845  108813 httplog.go:90] POST /api/v1/namespaces: (1.989237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I0911 16:09:28.972939  108813 httplog.go:90] GET /api/v1/services: (2.757954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45010]
I0911 16:09:28.974935  108813 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.525213ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45010]
I0911 16:09:28.976079  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:28.976159  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:28.976191  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:28.976220  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:28.976961  108813 httplog.go:90] POST /api/v1/namespaces: (1.477293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45012]
I0911 16:09:28.978074  108813 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (842.248µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45012]
I0911 16:09:28.981879  108813 httplog.go:90] GET /healthz: (4.803509ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:28.982081  108813 httplog.go:90] POST /api/v1/namespaces: (3.695576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45012]
I0911 16:09:29.057521  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.057564  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.057577  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.057587  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.057596  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.057627  108813 httplog.go:90] GET /healthz: (258.066µs) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:29.082779  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.082811  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.082819  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.082826  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.082832  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.082867  108813 httplog.go:90] GET /healthz: (221.625µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.157799  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.157843  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.157855  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.157864  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.157872  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.157899  108813 httplog.go:90] GET /healthz: (254.743µs) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:29.182927  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.182969  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.182983  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.182994  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.183003  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.183061  108813 httplog.go:90] GET /healthz: (301.003µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.257619  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.257663  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.257677  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.257688  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.257697  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.257735  108813 httplog.go:90] GET /healthz: (305.876µs) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:29.282749  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.282787  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.282799  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.282809  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.282817  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.282844  108813 httplog.go:90] GET /healthz: (247.042µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.357565  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.357615  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.357629  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.357638  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.357645  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.357687  108813 httplog.go:90] GET /healthz: (274.57µs) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:29.382884  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.382926  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.382939  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.382949  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.382957  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.382989  108813 httplog.go:90] GET /healthz: (269.726µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.457515  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.457554  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.457567  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.457577  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.457585  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.457619  108813 httplog.go:90] GET /healthz: (257.689µs) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:29.482807  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.482847  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.482861  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.482871  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.482880  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.482913  108813 httplog.go:90] GET /healthz: (279.252µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.557617  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.557656  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.557670  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.557681  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.557691  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.557736  108813 httplog.go:90] GET /healthz: (285.244µs) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:29.582903  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.582950  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.582964  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.582976  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.582988  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.583024  108813 httplog.go:90] GET /healthz: (289.993µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.657515  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.657554  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.657567  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.657577  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.657585  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.657617  108813 httplog.go:90] GET /healthz: (257.804µs) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:29.682836  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:09:29.682879  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.682892  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.682902  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.682910  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.682952  108813 httplog.go:90] GET /healthz: (289.513µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.690602  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:09:29.690721  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:09:29.758738  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.758770  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.758782  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.758792  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.758835  108813 httplog.go:90] GET /healthz: (1.412623ms) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:29.783895  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.783929  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.783940  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.783949  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.783991  108813 httplog.go:90] GET /healthz: (1.364657ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.858714  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.858747  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.858756  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.858764  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.858801  108813 httplog.go:90] GET /healthz: (1.347543ms) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:29.883492  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.883522  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.883533  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.883542  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.883579  108813 httplog.go:90] GET /healthz: (991.955µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.958148  108813 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.498568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.958185  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (912.03µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45020]
I0911 16:09:29.958567  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.81509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I0911 16:09:29.959107  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.959133  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:09:29.959143  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:09:29.959150  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:09:29.959176  108813 httplog.go:90] GET /healthz: (1.533225ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:29.960862  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.706819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45020]
I0911 16:09:29.961099  108813 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (2.224425ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45014]
I0911 16:09:29.961132  108813 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.482657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.961485  108813 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0911 16:09:29.963549  108813 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.763492ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.963554  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.433084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:29.966424  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.514093ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.966557  108813 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (4.809368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45020]
I0911 16:09:29.966718  108813 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.825788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:29.967034  108813 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0911 16:09:29.967054  108813 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0911 16:09:29.967572  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (793.051µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45020]
I0911 16:09:29.968715  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (757.492µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:29.969891  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (759.195µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:29.971427  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (723.348µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:29.972806  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.021116ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:29.974249  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (776.7µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:29.976643  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.059379ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:29.976823  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0911 16:09:29.977896  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (953.061µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:29.980986  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.437045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:29.981155  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0911 16:09:29.982186  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (907.267µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:29.984415  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.880294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.984536  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:29.984571  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:29.984597  108813 httplog.go:90] GET /healthz: (2.15613ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:29.984798  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0911 16:09:29.986193  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.263549ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.988289  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.751904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.988491  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0911 16:09:29.989568  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (796.745µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.991733  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.549236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.992506  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0911 16:09:29.993961  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (806.291µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.996189  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.620016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.996631  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0911 16:09:29.997784  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (969.618µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:29.999812  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.722758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.000002  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0911 16:09:30.001144  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.003479ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.003197  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.705312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.003416  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0911 16:09:30.004493  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (936.194µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.007551  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.474541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.008048  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0911 16:09:30.009852  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.372997ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.011994  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.806122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.012217  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0911 16:09:30.013148  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (793.629µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.017269  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.617066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.017564  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0911 16:09:30.018836  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.091296ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.021012  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.786296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.021326  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0911 16:09:30.022369  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (778.692µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.024460  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.728969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.024652  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0911 16:09:30.033452  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (8.658333ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.036044  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.020334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.036485  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0911 16:09:30.038001  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.123323ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.040875  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.270581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.041274  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0911 16:09:30.042238  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (703.563µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.044256  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.52596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.044456  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0911 16:09:30.045615  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.033546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.047542  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.416453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.048053  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0911 16:09:30.049197  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (962µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.051260  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.584965ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.051600  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0911 16:09:30.052770  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (858.497µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.054558  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.327805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.055710  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0911 16:09:30.057075  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (897.571µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.058264  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.058287  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.058330  108813 httplog.go:90] GET /healthz: (1.00254ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:30.060983  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.846005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.061365  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0911 16:09:30.062544  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.007383ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.064902  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.903312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.065147  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0911 16:09:30.066573  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.237673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.068968  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.823278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.069315  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0911 16:09:30.070883  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.35199ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.073619  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.898247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.074022  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0911 16:09:30.075145  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (909.902µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.077624  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.077359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.077944  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0911 16:09:30.079473  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.220303ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.081767  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.739399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.082081  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0911 16:09:30.083616  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.21869ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.105516  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (21.374252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.105827  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0911 16:09:30.106092  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.106110  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.106143  108813 httplog.go:90] GET /healthz: (21.283296ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.107820  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.72515ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.110274  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.800999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.110675  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0911 16:09:30.112144  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.272909ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.114918  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.316233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.115227  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0911 16:09:30.116603  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.131204ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.118779  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.858154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.119149  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0911 16:09:30.120333  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (927.162µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.123185  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.515277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.123397  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0911 16:09:30.124418  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (825.1µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.126377  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.6142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.126657  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0911 16:09:30.128407  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.566686ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.131871  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.623306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.132207  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0911 16:09:30.133469  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (908.511µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.135327  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.534554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.135795  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0911 16:09:30.137484  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.316445ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.139620  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.747078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.139876  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0911 16:09:30.140858  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (845.906µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.142689  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.586428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.142965  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0911 16:09:30.144232  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (930.369µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.146268  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.564249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.146531  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0911 16:09:30.147636  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (925.227µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.149926  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.790175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.150395  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0911 16:09:30.151542  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (906.232µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.153445  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.45412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.153761  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0911 16:09:30.154987  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (895.909µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.156614  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.314475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.156974  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0911 16:09:30.158174  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (913.437µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.158772  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.158831  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.158875  108813 httplog.go:90] GET /healthz: (1.687371ms) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:30.160854  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.166435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.161115  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0911 16:09:30.162457  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.202152ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.164804  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.71362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.165092  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0911 16:09:30.166810  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.552691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.169465  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.015374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.169714  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0911 16:09:30.171781  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.838021ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.173920  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.802371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.174508  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0911 16:09:30.175894  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (935.816µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.178081  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.553378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.178434  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0911 16:09:30.179418  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (847.368µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.181224  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.499721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.181412  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0911 16:09:30.182451  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (919.519µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.183646  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.184190  108813 cacher.go:771] cacher (*rbac.ClusterRole): 1 objects queued in incoming channel.
I0911 16:09:30.184368  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.591916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.184613  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0911 16:09:30.184988  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.185033  108813 httplog.go:90] GET /healthz: (2.540689ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.185650  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (836.075µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.188532  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.390344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.188958  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0911 16:09:30.190229  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (856.467µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.192633  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.044859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.192841  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0911 16:09:30.194024  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (918.878µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.197064  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.151699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.197413  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0911 16:09:30.198731  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.040735ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.200941  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.485092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.201164  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0911 16:09:30.204141  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (2.65662ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.206920  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.958195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.207204  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0911 16:09:30.208357  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (959.504µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.219627  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.710362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.219997  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0911 16:09:30.238562  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.653973ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.260179  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.36169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.260359  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.260378  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.260406  108813 httplog.go:90] GET /healthz: (2.651441ms) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:30.260715  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0911 16:09:30.278907  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.801007ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.288050  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.288081  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.288143  108813 httplog.go:90] GET /healthz: (1.501759ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.299581  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.676727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.299853  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0911 16:09:30.322804  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (5.952019ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.339799  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.912923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.340133  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0911 16:09:30.361982  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (5.113394ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.363211  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.363247  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.363284  108813 httplog.go:90] GET /healthz: (1.559394ms) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:30.380098  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.14168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.380572  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0911 16:09:30.383839  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.383866  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.383900  108813 httplog.go:90] GET /healthz: (1.283101ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.406330  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (9.421292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.419120  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.248249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.419873  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0911 16:09:30.438265  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.429581ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.458479  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.458512  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.458550  108813 httplog.go:90] GET /healthz: (1.177832ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:30.459448  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.587881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.459636  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0911 16:09:30.478384  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.493269ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.485992  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.486027  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.486063  108813 httplog.go:90] GET /healthz: (1.281597ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.503123  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.220245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.503412  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0911 16:09:30.518396  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.484587ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.540092  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.209775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.540399  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0911 16:09:30.558289  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.558409  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.558452  108813 httplog.go:90] GET /healthz: (1.240512ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:30.558483  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.625152ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.578744  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.926769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.579233  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0911 16:09:30.583529  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.583550  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.583581  108813 httplog.go:90] GET /healthz: (985.384µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.603611  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.694282ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.619495  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.597726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.619771  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0911 16:09:30.638453  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.516943ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.659947  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.041623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:30.660119  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.660150  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.660198  108813 httplog.go:90] GET /healthz: (2.395014ms) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:30.660364  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0911 16:09:30.678257  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.425636ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.683637  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.683669  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.683715  108813 httplog.go:90] GET /healthz: (1.166298ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.698916  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.121768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.699347  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0911 16:09:30.718753  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.522895ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.741238  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.620393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.741505  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0911 16:09:30.758347  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.758378  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.758424  108813 httplog.go:90] GET /healthz: (1.198658ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:30.759271  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (2.301165ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.779864  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.017175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.780177  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0911 16:09:30.783536  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.783562  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.783597  108813 httplog.go:90] GET /healthz: (1.072994ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.798642  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.58392ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.819506  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.598006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.819746  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0911 16:09:30.838777  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.888242ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.859868  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.859903  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.859940  108813 httplog.go:90] GET /healthz: (1.227774ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:30.860827  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.955475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.861057  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0911 16:09:30.878397  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.54229ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.883833  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.883864  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.883904  108813 httplog.go:90] GET /healthz: (1.283826ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.899612  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.775462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.899883  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0911 16:09:30.918383  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.423198ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.939374  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.529563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.939927  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0911 16:09:30.958268  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.958378  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.958421  108813 httplog.go:90] GET /healthz: (1.165551ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:30.958849  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.940773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.979665  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.827257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.979930  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0911 16:09:30.983850  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:30.983880  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:30.983919  108813 httplog.go:90] GET /healthz: (1.290318ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:30.998443  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.535197ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.019180  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.29212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.019465  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0911 16:09:31.038444  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.54285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.058806  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.058838  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.058878  108813 httplog.go:90] GET /healthz: (1.490769ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:31.059188  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.321963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.059442  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0911 16:09:31.078607  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.739644ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.083813  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.083844  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.083882  108813 httplog.go:90] GET /healthz: (1.293947ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.099664  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.73903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.099932  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0911 16:09:31.118687  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.69251ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.139606  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.736143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.139975  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0911 16:09:31.158437  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.158470  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.158506  108813 httplog.go:90] GET /healthz: (1.212083ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:31.158917  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.027468ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.179958  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.096046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.180598  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0911 16:09:31.190624  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.190665  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.190708  108813 httplog.go:90] GET /healthz: (2.125813ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.198201  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.340676ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.219362  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.475427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.219597  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0911 16:09:31.238508  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.596994ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.260165  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.260204  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.260254  108813 httplog.go:90] GET /healthz: (2.382531ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:31.260432  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.514295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.260625  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0911 16:09:31.280159  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.447394ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.283451  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.283514  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.283550  108813 httplog.go:90] GET /healthz: (1.003141ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.299499  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.598641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.299771  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0911 16:09:31.318219  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.343923ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.339378  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.482588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.339683  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0911 16:09:31.358675  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.768273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.358936  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.358963  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.359002  108813 httplog.go:90] GET /healthz: (1.748157ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:31.381599  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.35925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:31.384025  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.384062  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.384098  108813 httplog.go:90] GET /healthz: (1.086875ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.384502  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0911 16:09:31.398052  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.13626ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:31.419466  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.564036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:31.419749  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0911 16:09:31.443020  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.927777ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:31.461507  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.475211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:31.461641  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.461668  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.461721  108813 httplog.go:90] GET /healthz: (3.540439ms) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:31.461986  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0911 16:09:31.478653  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.633157ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.483715  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.483745  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.483796  108813 httplog.go:90] GET /healthz: (1.152641ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.499274  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.332361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.499745  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0911 16:09:31.518248  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.410653ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.540356  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.446305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.540837  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0911 16:09:31.558276  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.386079ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.558575  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.558601  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.558656  108813 httplog.go:90] GET /healthz: (1.388914ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:31.579430  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.490117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:31.579697  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0911 16:09:31.583572  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.583608  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.583644  108813 httplog.go:90] GET /healthz: (1.078172ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:31.599269  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (2.03058ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:31.619112  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.208126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:31.619618  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0911 16:09:31.638540  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.611927ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:31.660198  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.253992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:31.660509  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0911 16:09:31.661916  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.661944  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.661997  108813 httplog.go:90] GET /healthz: (2.709259ms) 0 [Go-http-client/1.1 127.0.0.1:45016]
I0911 16:09:31.678405  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.570424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.683573  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.683601  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.683636  108813 httplog.go:90] GET /healthz: (1.079404ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.699167  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.314942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.699460  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0911 16:09:31.718242  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.449532ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.724037  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.375263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.741765  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.322017ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.742010  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0911 16:09:31.758379  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.558035ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.758626  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.758649  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.758681  108813 httplog.go:90] GET /healthz: (1.160062ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:31.760190  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.399813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.778876  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.993962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.779152  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0911 16:09:31.783506  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.783535  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.783569  108813 httplog.go:90] GET /healthz: (1.01999ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.798579  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.508682ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.801026  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.983386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.819125  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.256779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.819449  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0911 16:09:31.842188  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.481624ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.844443  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.696085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.858114  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.858156  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.858195  108813 httplog.go:90] GET /healthz: (930.084µs) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:31.859823  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.839533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.860012  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0911 16:09:31.877865  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.075924ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.879218  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.002001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.883242  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.883270  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.883315  108813 httplog.go:90] GET /healthz: (780.83µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.899397  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.516404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.899679  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0911 16:09:31.918270  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.439084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.924498  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (5.782472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.938823  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.02672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.939079  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0911 16:09:31.960224  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.960266  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.960325  108813 httplog.go:90] GET /healthz: (1.821367ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:31.960604  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.10349ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.962686  108813 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.617954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.978760  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.908169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.979087  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0911 16:09:31.983803  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:31.983834  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:31.983873  108813 httplog.go:90] GET /healthz: (1.19873ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:31.998367  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.498946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.000336  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.521888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.020381  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.503977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.020657  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0911 16:09:32.038553  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.550098ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.040752  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.763531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.060161  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:32.060196  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:32.060233  108813 httplog.go:90] GET /healthz: (2.382689ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:32.060369  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.514674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.060630  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0911 16:09:32.078248  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.382446ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.080209  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.393726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.083513  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:32.083535  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:32.083564  108813 httplog.go:90] GET /healthz: (973.969µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.099096  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.202146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.099404  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0911 16:09:32.118472  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.47728ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.120225  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.305542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.139052  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.218116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.139341  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0911 16:09:32.158671  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:32.158720  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:32.158755  108813 httplog.go:90] GET /healthz: (1.263888ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:32.159014  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (2.196893ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.160721  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.199544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.181185  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.640968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.181497  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0911 16:09:32.183566  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:32.183596  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:32.183635  108813 httplog.go:90] GET /healthz: (1.058771ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.198438  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.499482ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.200697  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.546008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.219460  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.629029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.220034  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0911 16:09:32.242463  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.756657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.244904  108813 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.849091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.259070  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:09:32.259112  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:09:32.259153  108813 httplog.go:90] GET /healthz: (1.285345ms) 0 [Go-http-client/1.1 127.0.0.1:45022]
I0911 16:09:32.259455  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.543672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.259969  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0911 16:09:32.284103  108813 httplog.go:90] GET /healthz: (1.356314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.286358  108813 httplog.go:90] GET /api/v1/namespaces/default: (1.836894ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.291962  108813 httplog.go:90] POST /api/v1/namespaces: (5.176237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.293851  108813 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.4496ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.298728  108813 httplog.go:90] POST /api/v1/namespaces/default/services: (4.229419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.300701  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.402103ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.303535  108813 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.26555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.358749  108813 httplog.go:90] GET /healthz: (1.359513ms) 200 [Go-http-client/1.1 127.0.0.1:45016]
W0911 16:09:32.359818  108813 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 16:09:32.359855  108813 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 16:09:32.359892  108813 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 16:09:32.359914  108813 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 16:09:32.359926  108813 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 16:09:32.359936  108813 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 16:09:32.359948  108813 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 16:09:32.359969  108813 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 16:09:32.359980  108813 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 16:09:32.360048  108813 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 16:09:32.360060  108813 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0911 16:09:32.360080  108813 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0911 16:09:32.360090  108813 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0911 16:09:32.360666  108813 reflector.go:120] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.360688  108813 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.361040  108813 reflector.go:120] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.361051  108813 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.361354  108813 reflector.go:120] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.361366  108813 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.361614  108813 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (639.845µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:09:32.361821  108813 reflector.go:120] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.361838  108813 reflector.go:158] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.362070  108813 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.362088  108813 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.362136  108813 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (674.928µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:32.362326  108813 reflector.go:120] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.362342  108813 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.362447  108813 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (502.743µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45176]
I0911 16:09:32.362611  108813 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.362625  108813 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.362663  108813 reflector.go:120] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.362674  108813 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.362710  108813 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (530.221µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45178]
I0911 16:09:32.362926  108813 reflector.go:120] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.362940  108813 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.363127  108813 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=28088 labels= fields= timeout=9m4s
I0911 16:09:32.363158  108813 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (347.185µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:09:32.363258  108813 reflector.go:120] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.363274  108813 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.363514  108813 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (437.014µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45184]
I0911 16:09:32.363643  108813 reflector.go:120] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.363658  108813 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0911 16:09:32.363671  108813 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (595.48µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45182]
I0911 16:09:32.363805  108813 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=28089 labels= fields= timeout=5m22s
I0911 16:09:32.364019  108813 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (389.053µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0911 16:09:32.364314  108813 get.go:250] Starting watch for /api/v1/pods, rv=28088 labels= fields= timeout=6m52s
I0911 16:09:32.364335  108813 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (435.217µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45194]
I0911 16:09:32.364472  108813 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (459.215µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45188]
I0911 16:09:32.364632  108813 get.go:250] Starting watch for /api/v1/services, rv=28222 labels= fields= timeout=9m15s
I0911 16:09:32.364714  108813 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=28089 labels= fields= timeout=7m54s
I0911 16:09:32.364739  108813 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=28088 labels= fields= timeout=5m55s
I0911 16:09:32.365006  108813 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=28089 labels= fields= timeout=7m14s
I0911 16:09:32.365057  108813 get.go:250] Starting watch for /api/v1/nodes, rv=28088 labels= fields= timeout=8m49s
I0911 16:09:32.365286  108813 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=28088 labels= fields= timeout=9m46s
I0911 16:09:32.365847  108813 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=28089 labels= fields= timeout=7m47s
I0911 16:09:32.366566  108813 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (3.310609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45176]
I0911 16:09:32.367551  108813 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=28089 labels= fields= timeout=6m54s
I0911 16:09:32.463339  108813 shared_informer.go:227] caches populated
I0911 16:09:32.563582  108813 shared_informer.go:227] caches populated
I0911 16:09:32.663820  108813 shared_informer.go:227] caches populated
I0911 16:09:32.764031  108813 shared_informer.go:227] caches populated
I0911 16:09:32.864245  108813 shared_informer.go:227] caches populated
I0911 16:09:32.964483  108813 shared_informer.go:227] caches populated
I0911 16:09:33.064700  108813 shared_informer.go:227] caches populated
I0911 16:09:33.164972  108813 shared_informer.go:227] caches populated
I0911 16:09:33.265204  108813 shared_informer.go:227] caches populated
I0911 16:09:33.365477  108813 shared_informer.go:227] caches populated
I0911 16:09:33.465708  108813 shared_informer.go:227] caches populated
I0911 16:09:33.565878  108813 shared_informer.go:227] caches populated
I0911 16:09:33.570999  108813 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0911 16:09:33.571442  108813 httplog.go:90] POST /api/v1/nodes: (4.424888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.574115  108813 httplog.go:90] POST /api/v1/nodes: (2.059256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.574528  108813 node_tree.go:93] Added node "test-node-1" in group "" to NodeTree
I0911 16:09:33.577254  108813 httplog.go:90] POST /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods: (2.108904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.577706  108813 scheduling_queue.go:830] About to try and schedule pod bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod
I0911 16:09:33.577722  108813 scheduler.go:530] Attempting to schedule pod: bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod
I0911 16:09:33.577977  108813 scheduler_binder.go:256] AssumePodVolumes for pod "bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod", node "test-node-0"
I0911 16:09:33.577992  108813 scheduler_binder.go:266] AssumePodVolumes for pod "bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0911 16:09:33.578045  108813 factory.go:606] Attempting to bind test-pod to test-node-0
I0911 16:09:33.580541  108813 httplog.go:90] POST /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod/binding: (2.292225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.580760  108813 scheduler.go:667] pod bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod is bound successfully on node "test-node-0", 2 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0911 16:09:33.583171  108813 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/events: (1.973446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.680172  108813 httplog.go:90] GET /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (2.183794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.682400  108813 httplog.go:90] GET /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (1.432785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.698647  108813 httplog.go:90] DELETE /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (5.15656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.701334  108813 httplog.go:90] GET /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (1.05887ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.704390  108813 scheduling_queue.go:830] About to try and schedule pod bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod
I0911 16:09:33.704415  108813 scheduler.go:530] Attempting to schedule pod: bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod
I0911 16:09:33.704471  108813 httplog.go:90] POST /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods: (2.708979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.704730  108813 scheduler_binder.go:256] AssumePodVolumes for pod "bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod", node "test-node-0"
I0911 16:09:33.704883  108813 scheduler_binder.go:266] AssumePodVolumes for pod "bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0911 16:09:33.706875  108813 httplog.go:90] POST /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod/binding: (1.539598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.707164  108813 scheduler.go:667] pod bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod is bound successfully on node "test-node-0", 2 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0911 16:09:33.708918  108813 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/events: (1.414871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.807361  108813 httplog.go:90] GET /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (1.860547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.809477  108813 httplog.go:90] GET /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (1.566752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.827568  108813 httplog.go:90] DELETE /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (7.037925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.830258  108813 httplog.go:90] GET /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (1.105774ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.833250  108813 httplog.go:90] POST /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods: (2.274028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.833711  108813 scheduling_queue.go:830] About to try and schedule pod bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod
I0911 16:09:33.833735  108813 scheduler.go:530] Attempting to schedule pod: bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod
I0911 16:09:33.834043  108813 scheduler_binder.go:256] AssumePodVolumes for pod "bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod", node "test-node-0"
I0911 16:09:33.834077  108813 scheduler_binder.go:266] AssumePodVolumes for pod "bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0911 16:09:33.836910  108813 httplog.go:90] POST /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod/binding: (2.466797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.837192  108813 scheduler.go:667] pod bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod is bound successfully on node "test-node-0", 2 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0911 16:09:33.839492  108813 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/events: (1.807697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.937947  108813 httplog.go:90] GET /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (3.894262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.940144  108813 httplog.go:90] GET /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (1.415379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.957129  108813 httplog.go:90] DELETE /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (5.979269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.960047  108813 httplog.go:90] GET /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (1.271211ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.962891  108813 httplog.go:90] POST /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods: (2.101468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.962930  108813 scheduling_queue.go:830] About to try and schedule pod bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod
I0911 16:09:33.962946  108813 scheduler.go:530] Attempting to schedule pod: bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod
I0911 16:09:33.963209  108813 scheduler_binder.go:256] AssumePodVolumes for pod "bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod", node "test-node-0"
I0911 16:09:33.963237  108813 scheduler_binder.go:266] AssumePodVolumes for pod "bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod", node "test-node-0": all PVCs bound and nothing to do
E0911 16:09:33.963316  108813 framework.go:457] bind plugin "bind-plugin-1" failed to bind pod "bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod": failed to bind
I0911 16:09:33.963342  108813 scheduler.go:500] Failed to bind pod: bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod
E0911 16:09:33.963360  108813 scheduler.go:658] error binding pod: Bind failure, code: 1: bind plugin "bind-plugin-1" failed to bind pod "bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod": failed to bind
E0911 16:09:33.963378  108813 factory.go:557] Error scheduling bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod: Bind failure, code: 1: bind plugin "bind-plugin-1" failed to bind pod "bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod": failed to bind; retrying
I0911 16:09:33.963403  108813 factory.go:615] Updating pod condition for bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0911 16:09:33.966383  108813 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/events: (2.032321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45208]
I0911 16:09:33.966765  108813 httplog.go:90] PUT /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod/status: (3.039233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:33.967000  108813 httplog.go:90] GET /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (2.623196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45206]
I0911 16:09:33.975589  108813 httplog.go:90] GET /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (1.707991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
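The error sequence at 16:09:33.963 comes from a test-registered bind plugin that deliberately fails, and from the framework wrapping that failure into the message logged by framework.go:457 and scheduler.go:658. A simplified, self-contained sketch of that interaction (the real interfaces are in `k8s.io/kubernetes/pkg/scheduler/framework`; types and the single-plugin loop here are illustrative):

```go
package main

import "fmt"

// Status is a pared-down stand-in for the framework's Status: a code plus a message.
type Status struct {
	Code    int
	Message string
}

// Codes modeled on the framework: 0 = Success, 1 = Error
// (matching "Bind failure, code: 1" in the log).
const (
	Success = 0
	Error   = 1
)

// BindPlugin is a simplified version of the framework's BindPlugin interface.
type BindPlugin interface {
	Name() string
	Bind(pod string) *Status
}

// failingBind always refuses to bind, like bind-plugin-1 in this test case.
type failingBind struct{ name string }

func (p failingBind) Name() string { return p.name }
func (p failingBind) Bind(pod string) *Status {
	return &Status{Code: Error, Message: "failed to bind"}
}

// RunBindPlugins sketches the framework loop: the first plugin that returns a
// non-success status aborts binding and its error is surfaced to the scheduler,
// which then requeues the pod ("retrying" in the log).
func RunBindPlugins(plugins []BindPlugin, pod string) *Status {
	for _, pl := range plugins {
		st := pl.Bind(pod)
		if st == nil || st.Code == Success {
			return &Status{Code: Success}
		}
		return &Status{
			Code:    st.Code,
			Message: fmt.Sprintf("bind plugin %q failed to bind pod %q: %s", pl.Name(), pod, st.Message),
		}
	}
	return &Status{Code: Success}
}

func main() {
	st := RunBindPlugins([]BindPlugin{failingBind{"bind-plugin-1"}}, "test-pod")
	fmt.Println(st.Code, st.Message)
}
```

After the wrapped error is returned, the scheduler updates the pod condition to `PodScheduled==False, Reason=SchedulerError`, which matches the PUT to `/pods/test-pod/status` that follows.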
I0911 16:09:42.290159  108813 httplog.go:90] GET /api/v1/namespaces/default: (1.94406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:42.292631  108813 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.026706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:42.294611  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.613405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:52.289841  108813 httplog.go:90] GET /api/v1/namespaces/default: (1.610979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:52.291791  108813 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.58177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:09:52.294055  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.861728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:02.290022  108813 httplog.go:90] GET /api/v1/namespaces/default: (1.722789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:02.291947  108813 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.499377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:02.293847  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.623121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:12.290147  108813 httplog.go:90] GET /api/v1/namespaces/default: (1.660662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:12.291891  108813 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.262346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:12.293453  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.189392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:22.290871  108813 httplog.go:90] GET /api/v1/namespaces/default: (2.189356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:22.293039  108813 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.47399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:22.295079  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.303091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:28.984647  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.849285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:28.986869  108813 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.775888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:28.988371  108813 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.195313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:32.290501  108813 httplog.go:90] GET /api/v1/namespaces/default: (1.672988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:32.291940  108813 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.048099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:32.293395  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.200602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:33.979739  108813 scheduling_queue.go:830] About to try and schedule pod bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod
I0911 16:10:33.979778  108813 scheduler.go:526] Skip schedule deleting pod: bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/test-pod
I0911 16:10:33.982620  108813 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/events: (2.542483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45206]
I0911 16:10:33.984012  108813 httplog.go:90] DELETE /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (7.436311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:33.986520  108813 httplog.go:90] GET /api/v1/namespaces/bind-plugin9203138d-585f-4f6c-9f5e-7c131e6a34c2/pods/test-pod: (992.549µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:33.986985  108813 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=28088&timeout=6m52s&timeoutSeconds=412&watch=true: (1m1.622955016s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45178]
E0911 16:10:33.987087  108813 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0911 16:10:33.987287  108813 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=28088&timeout=5m55s&timeoutSeconds=355&watch=true: (1m1.622854361s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45192]
I0911 16:10:33.987479  108813 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=28089&timeout=7m14s&timeoutSeconds=434&watch=true: (1m1.622698779s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45188]
I0911 16:10:33.987612  108813 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=28088&timeout=9m46s&timeoutSeconds=586&watch=true: (1m1.622673692s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45194]
I0911 16:10:33.987634  108813 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=28089&timeout=7m47s&timeoutSeconds=467&watch=true: (1m1.622010107s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45016]
I0911 16:10:33.987666  108813 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=28089&timeout=6m54s&timeoutSeconds=414&watch=true: (1m1.620523344s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45176]
I0911 16:10:33.987777  108813 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=28088&timeout=8m49s&timeoutSeconds=529&watch=true: (1m1.622958458s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45186]
I0911 16:10:33.987786  108813 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=28088&timeout=9m4s&timeoutSeconds=544&watch=true: (1m1.625051588s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45180]
I0911 16:10:33.987800  108813 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=28089&timeout=5m22s&timeoutSeconds=322&watch=true: (1m1.624296225s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45190]
I0911 16:10:33.987902  108813 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=28222&timeout=9m15s&timeoutSeconds=555&watch=true: (1m1.623548987s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45182]
I0911 16:10:33.987919  108813 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=28089&timeout=7m54s&timeoutSeconds=474&watch=true: (1m1.623443887s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45022]
I0911 16:10:33.996004  108813 httplog.go:90] DELETE /api/v1/nodes: (8.876464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:33.996259  108813 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0911 16:10:33.997710  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.164749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
I0911 16:10:34.000887  108813 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.713326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45202]
--- FAIL: TestBindPlugin (65.31s)
    framework_test.go:1028: test #3: Waiting for invoke event 2 timeout.
    framework_test.go:1028: test #3: Waiting for invoke event 3 timeout.

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190911-160226.xml



k8s.io/kubernetes/test/integration/scheduler TestPreemptWithPermitPlugin 34s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptWithPermitPlugin$
=== RUN   TestPreemptWithPermitPlugin
W0911 16:11:05.511102  108813 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0911 16:11:05.511119  108813 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0911 16:11:05.511134  108813 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0911 16:11:05.511144  108813 master.go:259] Using reconciler: 
I0911 16:11:05.514024  108813 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.514600  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.514818  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.515843  108813 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0911 16:11:05.515898  108813 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.516180  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.516202  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.516337  108813 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0911 16:11:05.517889  108813 store.go:1342] Monitoring events count at <storage-prefix>//events
I0911 16:11:05.517940  108813 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.518339  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.518378  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.518478  108813 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0911 16:11:05.519259  108813 watch_cache.go:405] Replace watchCache (rev: 33734) 
I0911 16:11:05.519720  108813 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0911 16:11:05.519760  108813 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.519929  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.519958  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.520064  108813 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0911 16:11:05.521124  108813 watch_cache.go:405] Replace watchCache (rev: 33735) 
I0911 16:11:05.522243  108813 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0911 16:11:05.522616  108813 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.523104  108813 watch_cache.go:405] Replace watchCache (rev: 33735) 
I0911 16:11:05.523226  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.523247  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.522269  108813 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0911 16:11:05.524015  108813 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0911 16:11:05.524227  108813 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.524377  108813 watch_cache.go:405] Replace watchCache (rev: 33736) 
I0911 16:11:05.524431  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.524455  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.524559  108813 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0911 16:11:05.525172  108813 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0911 16:11:05.525199  108813 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0911 16:11:05.525376  108813 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.525496  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.525516  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.526142  108813 watch_cache.go:405] Replace watchCache (rev: 33736) 
I0911 16:11:05.526247  108813 watch_cache.go:405] Replace watchCache (rev: 33736) 
I0911 16:11:05.526992  108813 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0911 16:11:05.527126  108813 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.527272  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.527310  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.527397  108813 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0911 16:11:05.528240  108813 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0911 16:11:05.528368  108813 watch_cache.go:405] Replace watchCache (rev: 33737) 
I0911 16:11:05.528425  108813 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.528492  108813 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0911 16:11:05.528776  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.528804  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.529352  108813 watch_cache.go:405] Replace watchCache (rev: 33737) 
I0911 16:11:05.529963  108813 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0911 16:11:05.530012  108813 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0911 16:11:05.530102  108813 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.530247  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.530261  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.530911  108813 watch_cache.go:405] Replace watchCache (rev: 33737) 
I0911 16:11:05.531015  108813 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0911 16:11:05.531043  108813 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0911 16:11:05.531178  108813 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.531359  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.531391  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.532921  108813 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0911 16:11:05.533005  108813 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0911 16:11:05.533093  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.533220  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.533238  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.533578  108813 watch_cache.go:405] Replace watchCache (rev: 33738) 
I0911 16:11:05.534677  108813 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0911 16:11:05.534693  108813 watch_cache.go:405] Replace watchCache (rev: 33738) 
I0911 16:11:05.534823  108813 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.534945  108813 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0911 16:11:05.534959  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.534975  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.536108  108813 watch_cache.go:405] Replace watchCache (rev: 33738) 
I0911 16:11:05.536132  108813 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0911 16:11:05.536114  108813 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0911 16:11:05.536389  108813 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.536526  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.536547  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.537101  108813 watch_cache.go:405] Replace watchCache (rev: 33738) 
I0911 16:11:05.537345  108813 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0911 16:11:05.537397  108813 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0911 16:11:05.537376  108813 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.537606  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.537628  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.538326  108813 watch_cache.go:405] Replace watchCache (rev: 33738) 
I0911 16:11:05.539811  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.539842  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.540670  108813 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.540768  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.540780  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.541278  108813 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0911 16:11:05.541339  108813 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0911 16:11:05.541360  108813 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0911 16:11:05.541748  108813 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.541987  108813 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.542581  108813 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.543033  108813 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.543505  108813 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.544279  108813 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.544752  108813 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.544863  108813 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.544984  108813 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.546184  108813 watch_cache.go:405] Replace watchCache (rev: 33738) 
I0911 16:11:05.546677  108813 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.547238  108813 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.547506  108813 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.548032  108813 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.548273  108813 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.548740  108813 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.548972  108813 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.549642  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.549886  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.550070  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.550274  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.550516  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.550685  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.550862  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.551366  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.551637  108813 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.552340  108813 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.553157  108813 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.553667  108813 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.554146  108813 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.555173  108813 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.555658  108813 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.556722  108813 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.557694  108813 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.558508  108813 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.559457  108813 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.559896  108813 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.560158  108813 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0911 16:11:05.560335  108813 master.go:461] Enabling API group "authentication.k8s.io".
I0911 16:11:05.560450  108813 master.go:461] Enabling API group "authorization.k8s.io".
I0911 16:11:05.560736  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.561116  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.561274  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.562359  108813 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 16:11:05.562472  108813 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 16:11:05.562670  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.562811  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.562943  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.567806  108813 watch_cache.go:405] Replace watchCache (rev: 33738) 
I0911 16:11:05.576810  108813 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 16:11:05.577271  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.577033  108813 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 16:11:05.578145  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.578244  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.579846  108813 watch_cache.go:405] Replace watchCache (rev: 33739) 
I0911 16:11:05.581440  108813 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 16:11:05.581468  108813 master.go:461] Enabling API group "autoscaling".
I0911 16:11:05.581647  108813 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.581726  108813 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 16:11:05.581818  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.581839  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.584148  108813 watch_cache.go:405] Replace watchCache (rev: 33739) 
I0911 16:11:05.585888  108813 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0911 16:11:05.586074  108813 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.586221  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.586242  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.586341  108813 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0911 16:11:05.587599  108813 watch_cache.go:405] Replace watchCache (rev: 33739) 
I0911 16:11:05.589354  108813 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0911 16:11:05.589392  108813 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0911 16:11:05.589530  108813 master.go:461] Enabling API group "batch".
I0911 16:11:05.589786  108813 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.590074  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.590204  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.592432  108813 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0911 16:11:05.592554  108813 master.go:461] Enabling API group "certificates.k8s.io".
I0911 16:11:05.592520  108813 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0911 16:11:05.592888  108813 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.593047  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.593075  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.593931  108813 watch_cache.go:405] Replace watchCache (rev: 33739) 
I0911 16:11:05.594930  108813 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0911 16:11:05.595090  108813 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.595231  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.595250  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.595473  108813 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0911 16:11:05.597040  108813 watch_cache.go:405] Replace watchCache (rev: 33739) 
I0911 16:11:05.597541  108813 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0911 16:11:05.597625  108813 master.go:461] Enabling API group "coordination.k8s.io".
I0911 16:11:05.597663  108813 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0911 16:11:05.597582  108813 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0911 16:11:05.597873  108813 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.598009  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.598027  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.598029  108813 watch_cache.go:405] Replace watchCache (rev: 33739) 
I0911 16:11:05.600125  108813 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0911 16:11:05.600165  108813 master.go:461] Enabling API group "extensions".
I0911 16:11:05.600307  108813 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0911 16:11:05.600350  108813 watch_cache.go:405] Replace watchCache (rev: 33739) 
I0911 16:11:05.600352  108813 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.607890  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.607931  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.610870  108813 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0911 16:11:05.611044  108813 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.611165  108813 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0911 16:11:05.611264  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.611289  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.612704  108813 watch_cache.go:405] Replace watchCache (rev: 33739) 
I0911 16:11:05.615399  108813 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0911 16:11:05.615426  108813 master.go:461] Enabling API group "networking.k8s.io".
I0911 16:11:05.615469  108813 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.615607  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.615633  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.615724  108813 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0911 16:11:05.621634  108813 watch_cache.go:405] Replace watchCache (rev: 33739) 
I0911 16:11:05.621698  108813 watch_cache.go:405] Replace watchCache (rev: 33739) 
I0911 16:11:05.623837  108813 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0911 16:11:05.623870  108813 master.go:461] Enabling API group "node.k8s.io".
I0911 16:11:05.624068  108813 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.624239  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.624276  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.624562  108813 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0911 16:11:05.626130  108813 watch_cache.go:405] Replace watchCache (rev: 33740) 
I0911 16:11:05.630079  108813 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0911 16:11:05.630274  108813 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.630437  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.630458  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.630546  108813 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0911 16:11:05.631821  108813 watch_cache.go:405] Replace watchCache (rev: 33740) 
I0911 16:11:05.632522  108813 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0911 16:11:05.632546  108813 master.go:461] Enabling API group "policy".
I0911 16:11:05.632589  108813 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.632745  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.632770  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.632862  108813 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0911 16:11:05.634153  108813 watch_cache.go:405] Replace watchCache (rev: 33740) 
I0911 16:11:05.634880  108813 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0911 16:11:05.635073  108813 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.635255  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.635275  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.635389  108813 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0911 16:11:05.636176  108813 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0911 16:11:05.636209  108813 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.636366  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.636386  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.636428  108813 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0911 16:11:05.636651  108813 watch_cache.go:405] Replace watchCache (rev: 33740) 
I0911 16:11:05.637311  108813 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0911 16:11:05.637399  108813 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0911 16:11:05.637467  108813 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.637542  108813 watch_cache.go:405] Replace watchCache (rev: 33740) 
I0911 16:11:05.637592  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.637611  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.641124  108813 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0911 16:11:05.641177  108813 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.641339  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.641359  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.641444  108813 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0911 16:11:05.645437  108813 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0911 16:11:05.645608  108813 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.645750  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.645769  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.645863  108813 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0911 16:11:05.646391  108813 watch_cache.go:405] Replace watchCache (rev: 33740) 
I0911 16:11:05.647172  108813 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0911 16:11:05.647209  108813 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.647334  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.647354  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.647439  108813 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0911 16:11:05.647456  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.648097  108813 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0911 16:11:05.648274  108813 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0911 16:11:05.648284  108813 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.648423  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.648443  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.649142  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.649531  108813 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0911 16:11:05.649557  108813 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0911 16:11:05.651113  108813 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0911 16:11:05.651588  108813 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.651891  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.651915  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.652238  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.652427  108813 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0911 16:11:05.652531  108813 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0911 16:11:05.652578  108813 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.652717  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.652735  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.653660  108813 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0911 16:11:05.653679  108813 master.go:461] Enabling API group "scheduling.k8s.io".
I0911 16:11:05.653716  108813 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0911 16:11:05.653807  108813 master.go:450] Skipping disabled API group "settings.k8s.io".
I0911 16:11:05.653940  108813 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.654091  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.654115  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.654230  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.654661  108813 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0911 16:11:05.654782  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.654821  108813 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.655048  108813 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0911 16:11:05.655191  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.655216  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.655869  108813 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0911 16:11:05.655900  108813 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.656005  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.656023  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.656102  108813 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0911 16:11:05.656655  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.657452  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.657577  108813 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0911 16:11:05.657649  108813 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.657703  108813 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0911 16:11:05.657766  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.657781  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.658724  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.658992  108813 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0911 16:11:05.659072  108813 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0911 16:11:05.659142  108813 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.659261  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.659281  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.659579  108813 watch_cache.go:405] Replace watchCache (rev: 33740) 
I0911 16:11:05.660388  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.660503  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.661172  108813 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0911 16:11:05.661349  108813 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.661472  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.661490  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.661569  108813 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0911 16:11:05.662180  108813 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0911 16:11:05.662202  108813 master.go:461] Enabling API group "storage.k8s.io".
I0911 16:11:05.662257  108813 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0911 16:11:05.662374  108813 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.662510  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.662528  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.664383  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.664394  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.664876  108813 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0911 16:11:05.664994  108813 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.665077  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.665096  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.665125  108813 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0911 16:11:05.666102  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.666579  108813 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0911 16:11:05.666651  108813 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0911 16:11:05.666747  108813 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.666875  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.666891  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.668018  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.668215  108813 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0911 16:11:05.668374  108813 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.668476  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.668493  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.668565  108813 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0911 16:11:05.669255  108813 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0911 16:11:05.669421  108813 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.669683  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.669709  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.669820  108813 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0911 16:11:05.670212  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.670674  108813 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0911 16:11:05.670699  108813 master.go:461] Enabling API group "apps".
I0911 16:11:05.670731  108813 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.670858  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.670876  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.670912  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.670967  108813 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0911 16:11:05.671695  108813 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0911 16:11:05.671734  108813 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.671859  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.671877  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.671950  108813 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0911 16:11:05.672799  108813 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0911 16:11:05.672830  108813 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.672862  108813 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0911 16:11:05.672944  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.672956  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.673167  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.673585  108813 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0911 16:11:05.673663  108813 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0911 16:11:05.673656  108813 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.673782  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.673806  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.674105  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.674363  108813 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0911 16:11:05.674382  108813 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0911 16:11:05.674424  108813 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.674436  108813 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0911 16:11:05.674516  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.674676  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:05.674698  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:05.675051  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.675518  108813 store.go:1342] Monitoring events count at <storage-prefix>//events
I0911 16:11:05.675546  108813 master.go:461] Enabling API group "events.k8s.io".
I0911 16:11:05.675791  108813 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.675879  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.675889  108813 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0911 16:11:05.676016  108813 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.676259  108813 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.676422  108813 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.676526  108813 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.676622  108813 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.676812  108813 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.676844  108813 watch_cache.go:405] Replace watchCache (rev: 33741) 
I0911 16:11:05.676911  108813 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.676998  108813 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.677083  108813 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.677935  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.678214  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.679018  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.679328  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.680106  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.680429  108813 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.681151  108813 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.681436  108813 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.682168  108813 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.682590  108813 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 16:11:05.682661  108813 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0911 16:11:05.683231  108813 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.683415  108813 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.683710  108813 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.684562  108813 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.685508  108813 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.686471  108813 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.686899  108813 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.687824  108813 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.688735  108813 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.689103  108813 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.689913  108813 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 16:11:05.690111  108813 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0911 16:11:05.691067  108813 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.691483  108813 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.692152  108813 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.693001  108813 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.693613  108813 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.694564  108813 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.695424  108813 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.696199  108813 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.696927  108813 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.697951  108813 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.698738  108813 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 16:11:05.698914  108813 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0911 16:11:05.699589  108813 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.700401  108813 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 16:11:05.700559  108813 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0911 16:11:05.701186  108813 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.701976  108813 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.702414  108813 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.703136  108813 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.703823  108813 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.704467  108813 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.705075  108813 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 16:11:05.705159  108813 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0911 16:11:05.706162  108813 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.706997  108813 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.707407  108813 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.708260  108813 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.708641  108813 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.709004  108813 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.709853  108813 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.710209  108813 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.710598  108813 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.711450  108813 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.711836  108813 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.712252  108813 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 16:11:05.712433  108813 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0911 16:11:05.712517  108813 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0911 16:11:05.719986  108813 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.721146  108813 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.723602  108813 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.724717  108813 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.725475  108813 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"46b9b5c7-b573-46a2-abe4-7b6d65bbd218", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 16:11:05.728684  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:11:05.728717  108813 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0911 16:11:05.728727  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:05.728738  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:11:05.728746  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:11:05.728754  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:11:05.728915  108813 httplog.go:90] GET /healthz: (366.405µs) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:05.729902  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.238007ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49726]
I0911 16:11:05.732605  108813 httplog.go:90] GET /api/v1/services: (1.169257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49726]
I0911 16:11:05.736400  108813 httplog.go:90] GET /api/v1/services: (1.037342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49726]
I0911 16:11:05.738922  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:11:05.738942  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:05.738950  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:11:05.738956  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:11:05.738962  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:11:05.738984  108813 httplog.go:90] GET /healthz: (135.363µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:05.740521  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.459102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49726]
I0911 16:11:05.742888  108813 httplog.go:90] GET /api/v1/services: (1.129355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49730]
I0911 16:11:05.743065  108813 httplog.go:90] POST /api/v1/namespaces: (1.884817ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49726]
I0911 16:11:05.743332  108813 httplog.go:90] GET /api/v1/services: (2.247404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:05.744751  108813 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.301434ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49726]
I0911 16:11:05.746607  108813 httplog.go:90] POST /api/v1/namespaces: (1.446705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:05.747771  108813 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (894.931µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:05.749246  108813 httplog.go:90] POST /api/v1/namespaces: (1.123879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:05.829672  108813 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 16:11:05.829724  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:05.829737  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:11:05.829747  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:11:05.829755  108813 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:11:05.829802  108813 httplog.go:90] GET /healthz: (298.893µs) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:06.510555  108813 client.go:361] parsed scheme: "endpoint"
I0911 16:11:06.510645  108813 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 16:11:06.531221  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:06.531254  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:11:06.531272  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:11:06.531281  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:11:06.531366  108813 httplog.go:90] GET /healthz: (1.83271ms) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:06.541013  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:06.541047  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:11:06.541059  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:11:06.541068  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:11:06.541106  108813 httplog.go:90] GET /healthz: (1.127304ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.631031  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:06.631064  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:11:06.631075  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:11:06.631083  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:11:06.631123  108813 httplog.go:90] GET /healthz: (1.473671ms) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:06.642058  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:06.642090  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:11:06.642100  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:11:06.642109  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:11:06.642158  108813 httplog.go:90] GET /healthz: (2.274942ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.730492  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:06.730540  108813 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 16:11:06.730554  108813 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 16:11:06.730562  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 16:11:06.730594  108813 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.774244ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.730609  108813 httplog.go:90] GET /healthz: (935.973µs) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:06.731083  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.584647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49730]
I0911 16:11:06.732577  108813 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.06866ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49730]
I0911 16:11:06.734215  108813 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.305007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.734422  108813 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0911 16:11:06.734457  108813 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.553133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49730]
I0911 16:11:06.734997  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.743874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49752]
I0911 16:11:06.735704  108813 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.034753ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.736107  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (683.877µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49730]
I0911 16:11:06.737805  108813 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.690398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.738000  108813 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0911 16:11:06.738017  108813 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0911 16:11:06.738047  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.556529ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49730]
I0911 16:11:06.739095  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (650.479µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.740322  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (865.715µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.740618  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:06.740647  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:06.740677  108813 httplog.go:90] GET /healthz: (719.833µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:06.741712  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (997.436µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.742721  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (735.379µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.743845  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (797.573µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.745012  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (820.806µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.746468  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.001394ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.748107  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.240128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.748328  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0911 16:11:06.749133  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (599.334µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.750599  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.073808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.750860  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0911 16:11:06.752771  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.71002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.754647  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.463158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.754932  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0911 16:11:06.755889  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (767.645µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.757517  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.264848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.757768  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0911 16:11:06.758865  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (884.552µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.760861  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.463168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.761075  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0911 16:11:06.762136  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (853.186µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.764899  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.406632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.765159  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0911 16:11:06.766667  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.241238ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.768598  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.500496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.768897  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0911 16:11:06.770125  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (931.797µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.771949  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.35537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.772154  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0911 16:11:06.773206  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (754.972µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.775130  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.445626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.775509  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0911 16:11:06.776563  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (845.94µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.778653  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.678361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.778904  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0911 16:11:06.779952  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (785.849µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.781945  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.521749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.782132  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0911 16:11:06.783935  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (976.264µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.786592  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.182467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.786871  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0911 16:11:06.787855  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (778.868µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.789815  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.446844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.790040  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0911 16:11:06.791080  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (861.634µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.792859  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.363668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.793056  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0911 16:11:06.794121  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (842.329µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.795911  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.321668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.796217  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0911 16:11:06.797236  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (710.191µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.798969  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.308104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.799187  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0911 16:11:06.800251  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (804.363µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.802007  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.385425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.802215  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0911 16:11:06.803264  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (835.044µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.805154  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.465656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.805416  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0911 16:11:06.806388  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (784.847µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.808056  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.315264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.808273  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0911 16:11:06.809287  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (797.031µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.811886  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.085897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.812122  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0911 16:11:06.813098  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (699.829µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.814743  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.369625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.814961  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0911 16:11:06.816468  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (869.11µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.818382  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.417714ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.818553  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0911 16:11:06.819647  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (841.603µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.821810  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.594963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.821973  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0911 16:11:06.822956  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (792.515µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.825055  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.760195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.825315  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0911 16:11:06.826709  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.177075ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.828463  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.331036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.828661  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0911 16:11:06.829636  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (805.608µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.830105  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:06.830132  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:06.830183  108813 httplog.go:90] GET /healthz: (822.833µs) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:06.831859  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.634866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.832145  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0911 16:11:06.833314  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (852.552µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.835192  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.434925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.835460  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0911 16:11:06.836449  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (796.171µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.838278  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.490025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.838472  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0911 16:11:06.839609  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (862.413µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.840521  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:06.840543  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:06.840565  108813 httplog.go:90] GET /healthz: (775.028µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:06.841801  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.759947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.842008  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0911 16:11:06.844040  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.789516ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.845854  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.34523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.846099  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0911 16:11:06.847136  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (836.808µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.848789  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.346928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.849063  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0911 16:11:06.850149  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (881.324µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.852073  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.295029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.852286  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0911 16:11:06.853245  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (733.718µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.855122  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.433962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.855403  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0911 16:11:06.856687  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (987.985µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.858881  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.494959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.859179  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0911 16:11:06.860266  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (864.035µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.862506  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.780042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.862740  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0911 16:11:06.863931  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (937.293µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.865826  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.455314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.866134  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0911 16:11:06.867370  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.003805ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.869676  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.785683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.869924  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0911 16:11:06.871169  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.010354ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.873377  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.676834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.873572  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0911 16:11:06.874780  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.010325ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.877249  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.010562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.877541  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0911 16:11:06.878833  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.019936ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.881005  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.632288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.881374  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0911 16:11:06.882869  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.284238ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.885434  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.797092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.885717  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0911 16:11:06.886828  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (826.838µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.888790  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.496043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.888992  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0911 16:11:06.889949  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (747.317µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.891651  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.314793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.891963  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0911 16:11:06.893337  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (796.433µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.895480  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.70626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.895773  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0911 16:11:06.896925  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (885.03µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.898888  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.517345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.899063  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0911 16:11:06.900530  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.238737ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.903431  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.47784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.903612  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0911 16:11:06.904970  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.096483ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.907265  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.831457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.907464  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0911 16:11:06.908527  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (865.588µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.910474  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.51103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.910814  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0911 16:11:06.912129  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (971.227µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.914472  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.788403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.914751  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0911 16:11:06.916272  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.303185ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.918942  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.599895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.919418  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0911 16:11:06.930539  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.418581ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:06.930573  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:06.930614  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:06.930651  108813 httplog.go:90] GET /healthz: (1.088466ms) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:06.941345  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:06.941375  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:06.941408  108813 httplog.go:90] GET /healthz: (1.357051ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:06.951352  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.355353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:06.951736  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0911 16:11:06.970536  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.53016ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:06.991442  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.447602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:06.991917  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0911 16:11:07.010514  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.543096ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.031408  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.031444  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.031485  108813 httplog.go:90] GET /healthz: (1.897621ms) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:07.031716  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.756126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.032072  108813 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0911 16:11:07.041236  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.041284  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.041362  108813 httplog.go:90] GET /healthz: (1.480516ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.050352  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.416391ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.071227  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.241592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.071609  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0911 16:11:07.090431  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.496015ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.110884  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.951927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.111165  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0911 16:11:07.130329  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.371619ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.130843  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.130870  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.130897  108813 httplog.go:90] GET /healthz: (1.498987ms) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:07.140879  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.140917  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.140981  108813 httplog.go:90] GET /healthz: (960.259µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.151258  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.292708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.151553  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0911 16:11:07.170393  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.44945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.190712  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.867299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.190970  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0911 16:11:07.210325  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.337972ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.231169  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.24662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.231627  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0911 16:11:07.231728  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.232257  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.232639  108813 httplog.go:90] GET /healthz: (3.164395ms) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:07.240905  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.240933  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.240962  108813 httplog.go:90] GET /healthz: (1.128819ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.250777  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.716536ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.273377  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.479717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.273808  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0911 16:11:07.290934  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.933911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.313508  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.460803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.313802  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0911 16:11:07.330505  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.330546  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.330604  108813 httplog.go:90] GET /healthz: (1.183087ms) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:07.331022  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.046809ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.345061  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.345095  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.345144  108813 httplog.go:90] GET /healthz: (5.057429ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.351067  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.165033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.351413  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0911 16:11:07.371013  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (2.045328ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.394129  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.43279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.394464  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0911 16:11:07.410235  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.30538ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.430504  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.430566  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.430618  108813 httplog.go:90] GET /healthz: (1.085476ms) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:07.431876  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.907562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.432120  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0911 16:11:07.440830  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.440864  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.440940  108813 httplog.go:90] GET /healthz: (1.008984ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.450258  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.311002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.471886  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.905307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.472253  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0911 16:11:07.491063  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (2.139312ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.512657  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.963031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.512873  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0911 16:11:07.530047  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.103285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.530499  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.530531  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.530608  108813 httplog.go:90] GET /healthz: (977.579µs) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:07.543019  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.543051  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.543118  108813 httplog.go:90] GET /healthz: (3.217146ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.550867  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.991598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.551068  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0911 16:11:07.569837  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.052267ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.591054  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.14775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.591357  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0911 16:11:07.610217  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.305858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.631212  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.631337  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.631486  108813 httplog.go:90] GET /healthz: (1.352727ms) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:07.631896  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.040068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.632087  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0911 16:11:07.640707  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.640737  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.640778  108813 httplog.go:90] GET /healthz: (957.409µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.650215  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.090868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.670726  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.787311ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.670977  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0911 16:11:07.690025  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.094181ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.712245  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.856787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.712692  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0911 16:11:07.730729  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.766354ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.731668  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.731694  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.731722  108813 httplog.go:90] GET /healthz: (2.253685ms) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:07.741134  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.741213  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.741275  108813 httplog.go:90] GET /healthz: (1.171666ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.750983  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.025557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.751429  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0911 16:11:07.770647  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.685883ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.791152  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.134649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.791451  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0911 16:11:07.810165  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.181963ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.830671  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.825256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.830878  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0911 16:11:07.831206  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.831234  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.831287  108813 httplog.go:90] GET /healthz: (1.874956ms) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:07.840525  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.840555  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.840581  108813 httplog.go:90] GET /healthz: (791.164µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.851598  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (2.782833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.871034  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.07726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.871344  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0911 16:11:07.890955  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (2.094916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.910742  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.868881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.910980  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0911 16:11:07.930464  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.469553ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:07.930759  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.930801  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.930831  108813 httplog.go:90] GET /healthz: (1.052722ms) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:07.941477  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:07.941528  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:07.941578  108813 httplog.go:90] GET /healthz: (1.323591ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.950926  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.968103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:07.951190  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0911 16:11:07.970393  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.380595ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.017816  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (28.703178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.018330  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0911 16:11:08.019680  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.131441ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.030490  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.030517  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.030553  108813 httplog.go:90] GET /healthz: (1.008924ms) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:08.031530  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.99594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.031772  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0911 16:11:08.040863  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.040891  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.040924  108813 httplog.go:90] GET /healthz: (1.021139ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.050634  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.285945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.070991  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.090069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.071241  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0911 16:11:08.090208  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.296657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.111339  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.253021ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.111791  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0911 16:11:08.130460  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.455781ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.131801  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.131836  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.131877  108813 httplog.go:90] GET /healthz: (1.605987ms) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:08.140937  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.140980  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.141023  108813 httplog.go:90] GET /healthz: (1.089142ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.152145  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.188355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.152493  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0911 16:11:08.170181  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.234ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.190904  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.999029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.191195  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0911 16:11:08.210331  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.349349ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.230634  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.230669  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.230709  108813 httplog.go:90] GET /healthz: (1.202832ms) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:08.231989  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.039345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.232216  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0911 16:11:08.241212  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.241243  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.241286  108813 httplog.go:90] GET /healthz: (1.33502ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.250648  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.664426ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.273715  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.041833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.273983  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0911 16:11:08.290144  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.193932ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.311428  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.44538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.312988  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0911 16:11:08.330203  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.328808ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.330523  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.330660  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.330852  108813 httplog.go:90] GET /healthz: (1.385093ms) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:08.341171  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.341211  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.341252  108813 httplog.go:90] GET /healthz: (1.351374ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.351999  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.002995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.352286  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0911 16:11:08.370494  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.566634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.391710  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.761986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.392155  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0911 16:11:08.411269  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (2.294685ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.430830  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.430863  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.430918  108813 httplog.go:90] GET /healthz: (860.943µs) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:08.431370  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.46014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.431555  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0911 16:11:08.441169  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.441206  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.441246  108813 httplog.go:90] GET /healthz: (1.16683ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.451198  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (2.295045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.472157  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.210109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.472537  108813 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0911 16:11:08.490099  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.181095ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.492186  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.504818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.511920  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.255495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.512151  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0911 16:11:08.531729  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.531765  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.531811  108813 httplog.go:90] GET /healthz: (1.966699ms) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:08.531830  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (2.895056ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.533991  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.495973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.541056  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.541079  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.541108  108813 httplog.go:90] GET /healthz: (1.250909ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.552206  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.333864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.552494  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0911 16:11:08.569799  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (962.462µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.571428  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.248331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.590940  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.859024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.591249  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0911 16:11:08.610263  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.368004ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.612322  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.513921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.630925  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.630953  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.630984  108813 httplog.go:90] GET /healthz: (904.55µs) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:08.632809  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.779889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.633041  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0911 16:11:08.640626  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.640657  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.640700  108813 httplog.go:90] GET /healthz: (790.918µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.650118  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.248765ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.655096  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.392657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.671269  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.369858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.671905  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0911 16:11:08.692420  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (3.533525ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.694653  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.54926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.712107  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.191756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.712457  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0911 16:11:08.730854  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.930225ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.731069  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.731107  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.731146  108813 httplog.go:90] GET /healthz: (1.515191ms) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:08.733408  108813 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.075482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.741253  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.741291  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.741351  108813 httplog.go:90] GET /healthz: (1.477956ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.751520  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.457288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.751968  108813 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0911 16:11:08.770695  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.741862ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.772753  108813 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.522159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.791481  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.353279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.791752  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0911 16:11:08.815244  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (6.316807ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.817288  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.407673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.830727  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.830764  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.830823  108813 httplog.go:90] GET /healthz: (1.329903ms) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:08.831649  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.697143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.831914  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0911 16:11:08.841751  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.841792  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.841837  108813 httplog.go:90] GET /healthz: (1.878136ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.850937  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.942564ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.854814  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.250625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.881945  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (12.983589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.882269  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0911 16:11:08.890915  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.95656ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.893123  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.507336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.915346  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (6.45705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.916061  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0911 16:11:08.931957  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (2.966348ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.934853  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.401402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:08.938159  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.938189  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.938232  108813 httplog.go:90] GET /healthz: (8.776734ms) 0 [Go-http-client/1.1 127.0.0.1:49754]
I0911 16:11:08.941857  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:08.941891  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:08.941958  108813 httplog.go:90] GET /healthz: (1.748415ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.951535  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.506435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.951842  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0911 16:11:08.971021  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.235801ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.973062  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.512322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.991389  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.377033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:08.991745  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0911 16:11:09.010832  108813 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.524157ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:09.013122  108813 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.762643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:09.030992  108813 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 16:11:09.031032  108813 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 16:11:09.031078  108813 httplog.go:90] GET /healthz: (1.595548ms) 0 [Go-http-client/1.1 127.0.0.1:49724]
I0911 16:11:09.031275  108813 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.326722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:09.031534  108813 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0911 16:11:09.040933  108813 httplog.go:90] GET /healthz: (947.124µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:09.043117  108813 httplog.go:90] GET /api/v1/namespaces/default: (1.612295ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:09.045649  108813 httplog.go:90] POST /api/v1/namespaces: (1.935241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:09.047365  108813 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.278269ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:09.053441  108813 httplog.go:90] POST /api/v1/namespaces/default/services: (5.437621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:09.055501  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.422845ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:09.058220  108813 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.151291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:09.130575  108813 httplog.go:90] GET /healthz: (936.244µs) 200 [Go-http-client/1.1 127.0.0.1:49754]
W0911 16:11:09.131440  108813 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0911 16:11:09.131674  108813 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0911 16:11:09.131717  108813 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0911 16:11:09.132137  108813 reflector.go:120] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.132155  108813 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.132465  108813 reflector.go:120] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.132482  108813 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.132538  108813 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.132550  108813 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.132812  108813 reflector.go:120] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.132832  108813 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.133005  108813 reflector.go:120] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.133025  108813 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.133523  108813 reflector.go:120] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.133541  108813 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.133853  108813 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (763.468µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:09.133854  108813 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (676.843µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50338]
I0911 16:11:09.133961  108813 reflector.go:120] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.133976  108813 reflector.go:158] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.134017  108813 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (392.911µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50342]
I0911 16:11:09.134348  108813 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (1.244515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:09.134428  108813 reflector.go:120] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.134442  108813 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.134559  108813 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (429.598µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50340]
I0911 16:11:09.134564  108813 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (452.799µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50344]
I0911 16:11:09.134820  108813 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.134835  108813 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.135619  108813 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (483.558µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:09.135627  108813 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (523.248µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50342]
I0911 16:11:09.135946  108813 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (512.317µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50350]
I0911 16:11:09.136160  108813 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=33737 labels= fields= timeout=8m1s
I0911 16:11:09.136233  108813 get.go:250] Starting watch for /api/v1/services, rv=34045 labels= fields= timeout=6m49s
I0911 16:11:09.136396  108813 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=33741 labels= fields= timeout=9m17s
I0911 16:11:09.136002  108813 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=33741 labels= fields= timeout=5m41s
I0911 16:11:09.136843  108813 reflector.go:120] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.136860  108813 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.136990  108813 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=33740 labels= fields= timeout=5m48s
I0911 16:11:09.137028  108813 reflector.go:120] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.137041  108813 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0911 16:11:09.137142  108813 get.go:250] Starting watch for /api/v1/pods, rv=33738 labels= fields= timeout=5m51s
I0911 16:11:09.137444  108813 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=33741 labels= fields= timeout=7m17s
I0911 16:11:09.137639  108813 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (549.227µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50352]
I0911 16:11:09.137894  108813 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (481.899µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50346]
I0911 16:11:09.138282  108813 get.go:250] Starting watch for /api/v1/nodes, rv=33738 labels= fields= timeout=8m51s
I0911 16:11:09.138289  108813 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=33741 labels= fields= timeout=9m8s
I0911 16:11:09.138631  108813 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=33736 labels= fields= timeout=9m9s
I0911 16:11:09.140289  108813 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=33738 labels= fields= timeout=9m6s
I0911 16:11:09.232132  108813 shared_informer.go:227] caches populated
I0911 16:11:10.338318  108813 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0911 16:11:10.338414  108813 httplog.go:90] POST /api/v1/nodes: (3.213547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:10.341071  108813 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods: (2.225919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:10.341622  108813 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/waiting-pod
I0911 16:11:10.341638  108813 scheduler.go:530] Attempting to schedule pod: preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/waiting-pod
I0911 16:11:10.341745  108813 scheduler_binder.go:256] AssumePodVolumes for pod "preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/waiting-pod", node "test-node-0"
I0911 16:11:10.341756  108813 scheduler_binder.go:266] AssumePodVolumes for pod "preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/waiting-pod", node "test-node-0": all PVCs bound and nothing to do
I0911 16:11:10.341798  108813 framework.go:537] waiting for 30s for pod "waiting-pod" at permit
I0911 16:11:10.356254  108813 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods: (3.631354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:10.357139  108813 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/preemptor-pod
I0911 16:11:10.357168  108813 scheduler.go:530] Attempting to schedule pod: preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/preemptor-pod
I0911 16:11:10.357421  108813 factory.go:541] Unable to schedule preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0911 16:11:10.357508  108813 factory.go:615] Updating pod condition for preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0911 16:11:10.361078  108813 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod/status: (2.63529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0911 16:11:10.361180  108813 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/events: (1.905095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0911 16:11:10.361771  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (3.323134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:10.364158  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.54814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0911 16:11:10.364688  108813 generic_scheduler.go:1211] Node test-node-0 is a potential node for preemption.
I0911 16:11:10.370337  108813 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod/status: (5.101194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:10.381573  108813 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/waiting-pod: (10.620284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:10.382652  108813 framework.go:547] rejected while waiting at permit: preempted
E0911 16:11:10.382831  108813 factory.go:557] Error scheduling preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/waiting-pod: rejected while waiting at permit: preempted; retrying
I0911 16:11:10.383006  108813 factory.go:615] Updating pod condition for preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/waiting-pod to (PodScheduled==False, Reason=Unschedulable)
I0911 16:11:10.390899  108813 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/events: (6.766135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0911 16:11:10.393092  108813 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/events: (10.10956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:10.393386  108813 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/waiting-pod/status: (9.173104ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0911 16:11:10.393580  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/waiting-pod: (9.655229ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
E0911 16:11:10.393641  108813 scheduler.go:333] Error updating the condition of the pod preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/waiting-pod: Operation cannot be fulfilled on pods "waiting-pod": StorageError: invalid object, Code: 4, Key: /46b9b5c7-b573-46a2-abe4-7b6d65bbd218/pods/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/waiting-pod, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 358e909e-85df-4af6-ab2b-424dc602dbee, UID in object meta: 
W0911 16:11:10.394752  108813 factory.go:587] A pod preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/waiting-pod no longer exists
I0911 16:11:10.459101  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.944526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:18.659182  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.063312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:18.759099  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.993993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:18.859115  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.950296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:18.958990  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.835458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.043862  108813 httplog.go:90] GET /api/v1/namespaces/default: (2.130276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.045905  108813 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.622059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.047624  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.205299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.059432  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.320958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.163921  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (6.783302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.259155  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.959452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.359510  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.281488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.459611  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.88512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.558915  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.759072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.659485  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.342946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.759243  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.049053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.858962  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.887862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:19.958691  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.616941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:20.059103  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.952545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:20.163861  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.795662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:20.258890  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.624492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:20.358782  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.705551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:20.459150  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.913161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:20.559022  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.872905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:20.658936  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.783449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:20.758802  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.67404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:20.858614  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.522061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:20.959205  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.898073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:21.059259  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.079439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:21.159097  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.729662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:21.259795  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.18965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:21.360089  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (3.034347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:21.458781  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.725311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:21.559122  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.817442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:21.658864  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.81296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:21.759278  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.088241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:21.859195  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.849136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:21.959110  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.955945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:22.059151  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.904273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:22.159199  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.051531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:22.259377  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.15882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:22.359156  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.059236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:22.458780  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.620306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:22.559144  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.932006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:22.658969  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.898003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:22.758797  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.715315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:22.861984  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (4.710975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:22.959201  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.040138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:23.059018  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.966285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:23.158655  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.618319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:23.259169  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.897409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:23.358601  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.529513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:23.458675  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.521426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:23.559184  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.007077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:23.659160  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.96504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:23.759119  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.905962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:23.859095  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.867936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:23.958996  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.831414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:24.059142  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.788684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:24.159535  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.369233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:24.258995  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.86726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:24.358900  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.779844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:24.458628  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.556669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:24.559034  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.866208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:24.659055  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.959944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:24.758964  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.787235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:24.859163  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.933345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:24.959050  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.861585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:25.058924  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.789438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:25.158979  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.889459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:25.259315  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.161159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:25.358929  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.758597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:25.459176  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.049691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:25.559251  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.078202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:25.658914  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.775618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:25.759202  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.040242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:25.858946  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.892592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:25.958817  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.615561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:26.059120  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.954686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:26.159440  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.259579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:26.259499  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.295467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:26.359483  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.321016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:26.458993  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.837052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:26.558934  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.752444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:26.658879  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.799388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:26.761106  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (4.081468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:26.859329  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.210932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:26.958633  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.589809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:27.058628  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.490678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:27.158652  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.616633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:27.258710  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.572956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:27.358487  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.458486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:27.461165  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (4.114748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:27.559421  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.256927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:27.658938  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.860974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:27.758743  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.659212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:27.859569  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.49397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:27.959089  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.954945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:28.058896  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.792656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:28.159000  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.920792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:28.259145  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.033104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:28.359030  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.901513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:28.459448  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.181426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:28.558887  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.732796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:28.658842  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.704967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:28.759209  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.012583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:28.859659  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.490259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:28.959059  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.960895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.043673  108813 httplog.go:90] GET /api/v1/namespaces/default: (1.765054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.045537  108813 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.427545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.047700  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.694877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.058883  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.730453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.159036  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.949556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.258920  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.852745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.359367  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.196519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.459053  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.915201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.559202  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.106759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.658787  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.662685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.758740  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.628463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.859132  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.060084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:29.959645  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.450178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:30.062593  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.628592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:30.159101  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.041946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:30.259036  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.926366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:30.358832  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.728693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:30.458742  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.582789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:30.558978  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.844062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:30.659324  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.141099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:30.759015  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.865188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:30.858836  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.668455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:30.958937  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.795609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:31.058915  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.86705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:31.159087  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.95834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:31.260893  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (3.857797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:31.359109  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.849216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:31.459103  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.048371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:31.559123  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.041296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:31.659057  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.929123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:31.758820  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.670718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:31.858977  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.96257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:31.959217  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.083901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:32.058641  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.569523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:32.158509  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.488542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:32.258901  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.762774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:32.358687  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.534908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:32.458899  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.555366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:32.558635  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.591413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:32.659043  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.853747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:32.758884  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.685041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:32.858597  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.532769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:32.958902  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.77216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:33.059050  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.97284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:33.158802  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.718891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:33.258911  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.808873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:33.358711  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.633641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:33.459149  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.981112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:33.559461  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.969932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:33.659473  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.840229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:33.762048  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.44241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:33.865517  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.319866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:33.961942  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (3.347142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:34.062780  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (5.748025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:34.159115  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.002322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:34.259845  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.822748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:34.358762  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.677911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:34.458810  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.788044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:34.559095  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.941774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:34.660168  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (3.070884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:34.759018  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.890761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:34.858977  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.515572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:34.960318  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (3.114477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:35.061248  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (4.160168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:35.158994  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.847438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:35.258819  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.706213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:35.358856  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.79566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:35.459780  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.204241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:35.559672  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.57451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:35.658431  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.373273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:35.758882  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.824657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:35.858787  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.406389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:35.959281  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.169501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:36.058839  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.660509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:36.158739  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.703407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:36.259122  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.903188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:36.359184  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.126046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:36.458872  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.685837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:36.559195  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.998563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:36.658856  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.77335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:36.759386  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.263807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:36.862651  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (3.342568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:36.958837  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.758519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:37.059193  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.102235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:37.158677  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.567095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:37.258753  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.697548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:37.358749  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.668244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:37.459497  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.266124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:37.559430  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.267338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:37.659017  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.90709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:37.759181  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.055312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:37.858928  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.72005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:37.958841  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.760136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:38.058837  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.796204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:38.158775  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.72022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:38.259480  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.346984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:38.358772  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.680858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:38.458966  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.851857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:38.558804  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.696874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:38.658560  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.564686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:38.759226  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.105251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:38.858983  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.818279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:38.959080  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.870392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.044136  108813 httplog.go:90] GET /api/v1/namespaces/default: (2.120633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.046443  108813 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.868356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.048520  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.680497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.059270  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.174268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.158954  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.966309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.259192  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.000525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.359526  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.292007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.460732  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.402526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.559060  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.972855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.659165  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.021793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.758854  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.832071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.859147  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.915488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:39.959497  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.352588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:40.059116  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.979919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:40.161012  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.741555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:40.259014  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.943784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:40.359655  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (2.523664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:40.361698  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.442429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:40.363661  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/waiting-pod: (1.223239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:40.365574  108813 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/waiting-pod: (1.489766ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:40.369139  108813 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/preemptor-pod
I0911 16:11:40.369193  108813 scheduler.go:526] Skip schedule deleting pod: preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/preemptor-pod
I0911 16:11:40.372265  108813 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/events: (2.599176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50618]
I0911 16:11:40.374450  108813 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (8.468896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:40.380372  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/waiting-pod: (4.41002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:40.383111  108813 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin54c99beb-0a61-4a0a-ad46-9cd18dad0866/pods/preemptor-pod: (1.001992ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
E0911 16:11:40.384032  108813 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0911 16:11:40.384357  108813 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=33737&timeout=8m1s&timeoutSeconds=481&watch=true: (31.248544659s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50348]
I0911 16:11:40.384395  108813 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=33740&timeout=5m48s&timeoutSeconds=348&watch=true: (31.247927481s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50354]
I0911 16:11:40.384402  108813 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=34045&timeout=6m49s&timeoutSeconds=409&watch=true: (31.24846925s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50338]
I0911 16:11:40.384415  108813 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=33741&timeout=9m8s&timeoutSeconds=548&watch=true: (31.246370061s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50352]
I0911 16:11:40.384430  108813 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=33736&timeout=9m9s&timeoutSeconds=549&watch=true: (31.246062357s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50346]
I0911 16:11:40.384536  108813 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=33741&timeout=9m17s&timeoutSeconds=557&watch=true: (31.248429609s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50340]
I0911 16:11:40.384563  108813 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=33741&timeout=5m41s&timeoutSeconds=341&watch=true: (31.248866646s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50344]
I0911 16:11:40.384569  108813 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=33741&timeout=7m17s&timeoutSeconds=437&watch=true: (31.247627804s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49724]
I0911 16:11:40.384596  108813 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=33738&timeout=9m6s&timeoutSeconds=546&watch=true: (31.244598903s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50358]
I0911 16:11:40.384617  108813 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=33738&timeout=8m51s&timeoutSeconds=531&watch=true: (31.248151861s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50356]
I0911 16:11:40.384722  108813 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=33738&timeout=5m51s&timeoutSeconds=351&watch=true: (31.247900351s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49754]
I0911 16:11:40.390772  108813 httplog.go:90] DELETE /api/v1/nodes: (6.081496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:40.391091  108813 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0911 16:11:40.392874  108813 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.401458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0911 16:11:40.395539  108813 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.062669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
--- FAIL: TestPreemptWithPermitPlugin (34.89s)
    framework_test.go:1538: Expected the preemptor pod to be scheduled. error: timed out waiting for the condition

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190911-160226.xml


