PR: johnSchnake: Add new flag for whitelisting node taints
Result: FAILURE
Tests: 1 failed / 2470 succeeded
Started: 2019-08-14 13:23
Elapsed: 30m35s
Revision:
Builder: gke-prow-ssd-pool-1a225945-fqsc
Refs: master:34791349, 81043:b38feb05
pod: 87711985-be96-11e9-bc02-ae225b01b9ea
infra-commit: 6e5b38c23
repo: k8s.io/kubernetes
repo-commit: e5c5c31c7f2d28971e042623cb78dbcc68f10776
repos: {u'k8s.io/kubernetes': u'master:34791349d656a9f8e45b7093012e29ad08782ffa,81043:b38feb0551684cd7e3612286fa50c1623ab863c8'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptWithPermitPlugin 1m4s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptWithPermitPlugin$
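
To reproduce locally, note that the scheduler integration tests expect a local etcd on 127.0.0.1:2379 (visible in the log below as the storage backend the test API server connects to). A minimal sketch, assuming a kubernetes repo checkout at the refs listed above and that etcd is not already on PATH (hack/install-etcd.sh and the third_party/etcd location are assumptions based on the usual kubernetes test setup, not taken from this job's config):

    hack/install-etcd.sh                                  # installs a test etcd under third_party/etcd
    export PATH="$PWD/third_party/etcd:$PATH"             # make the test etcd binary visible
    make test-integration WHAT=./test/integration/scheduler KUBE_TEST_ARGS="-run TestPreemptWithPermitPlugin$"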
=== RUN   TestPreemptWithPermitPlugin
I0814 13:49:00.088168  110404 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 13:49:00.088198  110404 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 13:49:00.088210  110404 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 13:49:00.088221  110404 master.go:234] Using reconciler: 
I0814 13:49:00.090483  110404 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.090620  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.090632  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.090674  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.092115  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.092548  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.092799  110404 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 13:49:00.092816  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.092849  110404 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.092882  110404 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 13:49:00.093062  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.093074  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.093106  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.093152  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.093818  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.093960  110404 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 13:49:00.094007  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.094010  110404 watch_cache.go:405] Replace watchCache (rev: 29368) 
I0814 13:49:00.093996  110404 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.094073  110404 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 13:49:00.094115  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.094124  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.094154  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.094239  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.094476  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.094599  110404 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 13:49:00.094625  110404 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.094683  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.094692  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.094718  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.094761  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.094796  110404 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 13:49:00.095024  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.095140  110404 watch_cache.go:405] Replace watchCache (rev: 29369) 
I0814 13:49:00.095679  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.095824  110404 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 13:49:00.096097  110404 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.096200  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.096212  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.096226  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.096241  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.096282  110404 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 13:49:00.096284  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.097050  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.097179  110404 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 13:49:00.097377  110404 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.097414  110404 watch_cache.go:405] Replace watchCache (rev: 29370) 
I0814 13:49:00.097425  110404 watch_cache.go:405] Replace watchCache (rev: 29370) 
I0814 13:49:00.097449  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.097460  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.097489  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.097563  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.097593  110404 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 13:49:00.097695  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.098594  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.098705  110404 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 13:49:00.098866  110404 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.098932  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.098943  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.098976  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.099016  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.099048  110404 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 13:49:00.099234  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.099371  110404 watch_cache.go:405] Replace watchCache (rev: 29370) 
I0814 13:49:00.099524  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.099617  110404 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 13:49:00.099671  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.099747  110404 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 13:49:00.099741  110404 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.099801  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.099813  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.099850  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.100071  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.101288  110404 watch_cache.go:405] Replace watchCache (rev: 29370) 
I0814 13:49:00.101567  110404 watch_cache.go:405] Replace watchCache (rev: 29370) 
I0814 13:49:00.101781  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.101805  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.101894  110404 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 13:49:00.101947  110404 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 13:49:00.102035  110404 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.102100  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.102110  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.102140  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.102181  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.102918  110404 watch_cache.go:405] Replace watchCache (rev: 29370) 
I0814 13:49:00.103067  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.103124  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.103218  110404 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 13:49:00.103341  110404 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 13:49:00.104261  110404 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.104330  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.104327  110404 watch_cache.go:405] Replace watchCache (rev: 29370) 
I0814 13:49:00.104338  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.104365  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.104406  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.104995  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.105096  110404 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 13:49:00.105228  110404 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.105287  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.105294  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.105315  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.105361  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.105383  110404 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 13:49:00.105538  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.105873  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.105996  110404 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 13:49:00.106158  110404 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.106220  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.106230  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.106261  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.106321  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.106349  110404 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 13:49:00.106538  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.106734  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.106762  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.106777  110404 watch_cache.go:405] Replace watchCache (rev: 29370) 
I0814 13:49:00.106855  110404 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 13:49:00.106997  110404 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 13:49:00.106986  110404 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.107148  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.107158  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.107186  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.107231  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.109894  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.110562  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.110050  110404 watch_cache.go:405] Replace watchCache (rev: 29371) 
I0814 13:49:00.110725  110404 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 13:49:00.110270  110404 watch_cache.go:405] Replace watchCache (rev: 29371) 
I0814 13:49:00.110897  110404 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.110977  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.110987  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.111022  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.111072  110404 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 13:49:00.111217  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.111659  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.111786  110404 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 13:49:00.111817  110404 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.111922  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.111934  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.111965  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.112013  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.112044  110404 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 13:49:00.112167  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.112411  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.112490  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.112517  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.112547  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.112590  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.112633  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.113176  110404 watch_cache.go:405] Replace watchCache (rev: 29372) 
I0814 13:49:00.115057  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.115272  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.115320  110404 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.115415  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.115425  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.115459  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.115530  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.116321  110404 watch_cache.go:405] Replace watchCache (rev: 29373) 
I0814 13:49:00.116839  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.117849  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.118758  110404 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 13:49:00.119191  110404 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 13:49:00.120606  110404 watch_cache.go:405] Replace watchCache (rev: 29373) 
I0814 13:49:00.121448  110404 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.121682  110404 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.122322  110404 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.123010  110404 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.125477  110404 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.126204  110404 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.126635  110404 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.126768  110404 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.126963  110404 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.127475  110404 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.128273  110404 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.128515  110404 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.129200  110404 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.129450  110404 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.129961  110404 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.130177  110404 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.130875  110404 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.131145  110404 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.131296  110404 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.131404  110404 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.131596  110404 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.131727  110404 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.131932  110404 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.132620  110404 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.132875  110404 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.133624  110404 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.134339  110404 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.134622  110404 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.134891  110404 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.135598  110404 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.135908  110404 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.136540  110404 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.137247  110404 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.137867  110404 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.138536  110404 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.138810  110404 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.138944  110404 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 13:49:00.138973  110404 master.go:434] Enabling API group "authentication.k8s.io".
I0814 13:49:00.138989  110404 master.go:434] Enabling API group "authorization.k8s.io".
I0814 13:49:00.139132  110404 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.139237  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.139255  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.139301  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.139366  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.139753  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.139992  110404 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 13:49:00.140032  110404 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 13:49:00.139993  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.140166  110404 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.140267  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.140281  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.140318  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.140392  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.141408  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.141962  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.142137  110404 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 13:49:00.142338  110404 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.142410  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.142422  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.142837  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.142888  110404 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 13:49:00.142969  110404 watch_cache.go:405] Replace watchCache (rev: 29378) 
I0814 13:49:00.143158  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.143450  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.143568  110404 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 13:49:00.143587  110404 master.go:434] Enabling API group "autoscaling".
I0814 13:49:00.143742  110404 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.143772  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.143788  110404 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 13:49:00.143817  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.143826  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.143863  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.143915  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.147674  110404 watch_cache.go:405] Replace watchCache (rev: 29378) 
I0814 13:49:00.148022  110404 watch_cache.go:405] Replace watchCache (rev: 29378) 
I0814 13:49:00.148332  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.148544  110404 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 13:49:00.148747  110404 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.148880  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.148894  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.148937  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.149001  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.149052  110404 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 13:49:00.149271  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.150351  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.150547  110404 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 13:49:00.150573  110404 master.go:434] Enabling API group "batch".
I0814 13:49:00.150724  110404 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.150799  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.150811  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.150855  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.151008  110404 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 13:49:00.151265  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.151427  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.151686  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.151777  110404 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 13:49:00.151793  110404 master.go:434] Enabling API group "certificates.k8s.io".
I0814 13:49:00.151971  110404 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.152038  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.152046  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.152088  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.152124  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.152152  110404 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 13:49:00.152354  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.152624  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.152709  110404 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 13:49:00.152852  110404 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.153047  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.153117  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.153215  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.153234  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.153259  110404 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 13:49:00.153411  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.158589  110404 watch_cache.go:405] Replace watchCache (rev: 29388) 
I0814 13:49:00.159039  110404 watch_cache.go:405] Replace watchCache (rev: 29388) 
I0814 13:49:00.159273  110404 watch_cache.go:405] Replace watchCache (rev: 29388) 
I0814 13:49:00.159604  110404 watch_cache.go:405] Replace watchCache (rev: 29388) 
I0814 13:49:00.162009  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.162442  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.164078  110404 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 13:49:00.164216  110404 master.go:434] Enabling API group "coordination.k8s.io".
I0814 13:49:00.164543  110404 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.164771  110404 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 13:49:00.165285  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.165394  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.165525  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.165689  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.167118  110404 watch_cache.go:405] Replace watchCache (rev: 29388) 
I0814 13:49:00.167683  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.168248  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.168422  110404 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 13:49:00.168806  110404 master.go:434] Enabling API group "extensions".
I0814 13:49:00.169472  110404 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.170072  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.168745  110404 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 13:49:00.170246  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.170440  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.170489  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.171055  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.171177  110404 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 13:49:00.171238  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.171303  110404 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 13:49:00.171334  110404 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.171426  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.171436  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.171472  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.171531  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.172153  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.172308  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.172372  110404 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 13:49:00.172456  110404 master.go:434] Enabling API group "networking.k8s.io".
I0814 13:49:00.172513  110404 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.172581  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.172590  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.172623  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.172423  110404 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 13:49:00.172847  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.173644  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.173775  110404 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 13:49:00.173793  110404 master.go:434] Enabling API group "node.k8s.io".
I0814 13:49:00.173853  110404 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 13:49:00.173960  110404 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.174028  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.174038  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.174068  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.174110  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.174324  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.174429  110404 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 13:49:00.174590  110404 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.174650  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.174659  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.174686  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.174722  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.174750  110404 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 13:49:00.174953  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.175394  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.175595  110404 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 13:49:00.175620  110404 master.go:434] Enabling API group "policy".
I0814 13:49:00.175653  110404 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.175716  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.175726  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.175743  110404 watch_cache.go:405] Replace watchCache (rev: 29388) 
I0814 13:49:00.175757  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.175800  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.175862  110404 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 13:49:00.176202  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.176415  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.176525  110404 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 13:49:00.176691  110404 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.176750  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.176761  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.176789  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.176829  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.176868  110404 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 13:49:00.177085  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.177306  110404 watch_cache.go:405] Replace watchCache (rev: 29393) 
I0814 13:49:00.177409  110404 watch_cache.go:405] Replace watchCache (rev: 29393) 
I0814 13:49:00.177410  110404 watch_cache.go:405] Replace watchCache (rev: 29393) 
I0814 13:49:00.177701  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.177780  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.177902  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.178181  110404 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 13:49:00.178228  110404 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 13:49:00.178221  110404 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.178286  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.178296  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.178328  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.178441  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.178712  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.178835  110404 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 13:49:00.178949  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.178963  110404 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 13:49:00.179018  110404 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.179085  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.179095  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.179123  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.179233  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.179477  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.179583  110404 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 13:49:00.179625  110404 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.179676  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.179685  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.179708  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.179739  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.179765  110404 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 13:49:00.179973  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.180570  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.180632  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.180700  110404 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 13:49:00.180754  110404 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 13:49:00.180858  110404 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.180929  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.180938  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.180966  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.181061  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.181276  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.181373  110404 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 13:49:00.181396  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.181402  110404 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.181437  110404 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 13:49:00.181469  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.181480  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.181540  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.181654  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.181873  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.181963  110404 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 13:49:00.182107  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.182105  110404 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.182165  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.182176  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.182204  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.182283  110404 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 13:49:00.182302  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.183230  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.183282  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.183379  110404 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 13:49:00.183414  110404 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0814 13:49:00.183419  110404 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 13:49:00.185422  110404 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.186364  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.186386  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.186427  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.186483  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.186753  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.186866  110404 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 13:49:00.186988  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.187025  110404 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.187097  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.187109  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.187138  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.187227  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.187256  110404 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 13:49:00.187880  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.187994  110404 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 13:49:00.188009  110404 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 13:49:00.188144  110404 master.go:423] Skipping disabled API group "settings.k8s.io".
I0814 13:49:00.188283  110404 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.188368  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.188379  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.188408  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.188436  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.188471  110404 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 13:49:00.188743  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.189677  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.189859  110404 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 13:49:00.189998  110404 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.190035  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.190066  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.190071  110404 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 13:49:00.190076  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.190729  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.190793  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.190992  110404 watch_cache.go:405] Replace watchCache (rev: 29393) 
I0814 13:49:00.191165  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.191184  110404 watch_cache.go:405] Replace watchCache (rev: 29393) 
I0814 13:49:00.191267  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.191276  110404 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 13:49:00.191348  110404 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.191408  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.191418  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.191447  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.191519  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.191545  110404 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 13:49:00.191797  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.191907  110404 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 13:49:00.191916  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.191935  110404 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.191977  110404 watch_cache.go:405] Replace watchCache (rev: 29393) 
I0814 13:49:00.192028  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.192040  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.192068  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.192150  110404 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 13:49:00.192157  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.192266  110404 watch_cache.go:405] Replace watchCache (rev: 29393) 
I0814 13:49:00.192475  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.192584  110404 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 13:49:00.192740  110404 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.192811  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.192822  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.192910  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.192961  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.192989  110404 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 13:49:00.193116  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.193867  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.193986  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.194000  110404 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 13:49:00.194038  110404 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 13:49:00.194152  110404 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.194232  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.194248  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.194279  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.194334  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.194566  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.194659  110404 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 13:49:00.194680  110404 master.go:434] Enabling API group "storage.k8s.io".
I0814 13:49:00.194797  110404 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.194869  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.194878  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.194905  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.194944  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.194974  110404 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 13:49:00.194812  110404 watch_cache.go:405] Replace watchCache (rev: 29393) 
I0814 13:49:00.195269  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.195558  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.196378  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.196383  110404 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 13:49:00.196607  110404 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.196674  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.196683  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.196716  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.196802  110404 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 13:49:00.197079  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.197725  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.197988  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.197997  110404 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 13:49:00.198115  110404 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 13:49:00.198400  110404 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.198472  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.198480  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.198590  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.198778  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.199144  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.199282  110404 watch_cache.go:405] Replace watchCache (rev: 29393) 
I0814 13:49:00.199316  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.199596  110404 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 13:49:00.199680  110404 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 13:49:00.199958  110404 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.200058  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.200069  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.200126  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.200174  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.217966  110404 watch_cache.go:405] Replace watchCache (rev: 29399) 
I0814 13:49:00.218236  110404 watch_cache.go:405] Replace watchCache (rev: 29403) 
I0814 13:49:00.218405  110404 watch_cache.go:405] Replace watchCache (rev: 29403) 
I0814 13:49:00.218992  110404 watch_cache.go:405] Replace watchCache (rev: 29403) 
I0814 13:49:00.219163  110404 watch_cache.go:405] Replace watchCache (rev: 29399) 
I0814 13:49:00.217966  110404 watch_cache.go:405] Replace watchCache (rev: 29398) 
I0814 13:49:00.219862  110404 watch_cache.go:405] Replace watchCache (rev: 29398) 
I0814 13:49:00.219223  110404 watch_cache.go:405] Replace watchCache (rev: 29403) 
I0814 13:49:00.220582  110404 watch_cache.go:405] Replace watchCache (rev: 29399) 
I0814 13:49:00.220665  110404 watch_cache.go:405] Replace watchCache (rev: 29399) 
I0814 13:49:00.220789  110404 watch_cache.go:405] Replace watchCache (rev: 29398) 
I0814 13:49:00.219700  110404 watch_cache.go:405] Replace watchCache (rev: 29404) 
I0814 13:49:00.221174  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.220415  110404 watch_cache.go:405] Replace watchCache (rev: 29399) 
I0814 13:49:00.221333  110404 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 13:49:00.221771  110404 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.221875  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.221888  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.221924  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.222123  110404 watch_cache.go:405] Replace watchCache (rev: 29399) 
I0814 13:49:00.222212  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.222245  110404 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 13:49:00.222434  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.223408  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.223527  110404 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 13:49:00.223547  110404 master.go:434] Enabling API group "apps".
I0814 13:49:00.223587  110404 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.223710  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.223721  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.223748  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.224387  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.224408  110404 watch_cache.go:405] Replace watchCache (rev: 29404) 
I0814 13:49:00.224458  110404 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 13:49:00.224713  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.225135  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.225254  110404 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 13:49:00.225292  110404 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.225365  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.225376  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.225406  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.225440  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.225469  110404 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 13:49:00.225706  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.225858  110404 watch_cache.go:405] Replace watchCache (rev: 29404) 
I0814 13:49:00.225993  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.226089  110404 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 13:49:00.226124  110404 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.226218  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.226229  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.226258  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.226303  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.226331  110404 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 13:49:00.227754  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.228104  110404 watch_cache.go:405] Replace watchCache (rev: 29404) 
I0814 13:49:00.228113  110404 watch_cache.go:405] Replace watchCache (rev: 29399) 
I0814 13:49:00.230011  110404 watch_cache.go:405] Replace watchCache (rev: 29404) 
I0814 13:49:00.231117  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.231516  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.231680  110404 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 13:49:00.231857  110404 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.232025  110404 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 13:49:00.233091  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.233107  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.233343  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.233401  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.234975  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.235083  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.234996  110404 watch_cache.go:405] Replace watchCache (rev: 29407) 
I0814 13:49:00.235204  110404 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 13:49:00.235229  110404 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 13:49:00.235268  110404 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.235370  110404 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 13:49:00.235543  110404 client.go:354] parsed scheme: ""
I0814 13:49:00.235560  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:00.235596  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:00.235731  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.236081  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:00.236193  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:00.237680  110404 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 13:49:00.237711  110404 master.go:434] Enabling API group "events.k8s.io".
I0814 13:49:00.238242  110404 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.238522  110404 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.238805  110404 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.239008  110404 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 13:49:00.239196  110404 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.239419  110404 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.239546  110404 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.239805  110404 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.239918  110404 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.240006  110404 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.240093  110404 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.241236  110404 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.241627  110404 watch_cache.go:405] Replace watchCache (rev: 29411) 
I0814 13:49:00.242113  110404 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.243433  110404 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.244292  110404 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.245311  110404 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.245634  110404 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.246809  110404 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.249454  110404 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.250713  110404 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.248941  110404 watch_cache.go:405] Replace watchCache (rev: 29416) 
I0814 13:49:00.251184  110404 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:49:00.251638  110404 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0814 13:49:00.252830  110404 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.253137  110404 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.253545  110404 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.254649  110404 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.255490  110404 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.256675  110404 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.257194  110404 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.258267  110404 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.259639  110404 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.260056  110404 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.261715  110404 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:49:00.261933  110404 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 13:49:00.262959  110404 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.263714  110404 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.264664  110404 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.265344  110404 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.266064  110404 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.266783  110404 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.267447  110404 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.268103  110404 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.268697  110404 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.269480  110404 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.270234  110404 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:49:00.270433  110404 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 13:49:00.271148  110404 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.271868  110404 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:49:00.272064  110404 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 13:49:00.272768  110404 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.273465  110404 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.273874  110404 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.274608  110404 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.275157  110404 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.275745  110404 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.276314  110404 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:49:00.276401  110404 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 13:49:00.277045  110404 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.277656  110404 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.277902  110404 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.278555  110404 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.279015  110404 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.279266  110404 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.279940  110404 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.280365  110404 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.280822  110404 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.281758  110404 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.282126  110404 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.282562  110404 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 13:49:00.282628  110404 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 13:49:00.282637  110404 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0814 13:49:00.284078  110404 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.284799  110404 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.285554  110404 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.286236  110404 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 13:49:00.287105  110404 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8c18970c-a0d1-420d-95ef-ac4c316be0d3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
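The storage_factory.go entries up to this point show every API group/version being wired to the same etcd-backed storage configuration: a single random prefix, the local etcd at http://127.0.0.1:2379, paging enabled, a 5-minute compaction interval and a 1-minute count-metric poll period. A minimal Go sketch of that configuration, written against the apiserver library vintage used by this job (the field names are taken directly from the log lines above; newer k8s.io/apiserver releases may rename or drop some of them):

package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// Sketch of the etcd-backed storage config the test apiserver logs above:
	// every resource group is stored under the same random prefix against the
	// local etcd at 127.0.0.1:2379. Values mirror the storage_factory.go lines.
	cfg := storagebackend.Config{
		Prefix: "8c18970c-a0d1-420d-95ef-ac4c316be0d3",
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Paging:                true,
		CompactionInterval:    5 * time.Minute, // 300000000000ns in the log
		CountMetricPollPeriod: time.Minute,     // 60000000000ns in the log
	}
	fmt.Printf("%+v\n", cfg)
}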
I0814 13:49:00.290793  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.290836  110404 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 13:49:00.290855  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.290865  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.290875  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.290882  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.290923  110404 httplog.go:90] GET /healthz: (251.512µs) 0 [Go-http-client/1.1 127.0.0.1:57564]
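The /healthz responses above aggregate per-check results; until the etcd client connects and the post-start hooks (bootstrap-controller, rbac/bootstrap-roles, scheduling priority classes, ca-registration) finish, the endpoint reports failure with the "[+]/[-]" breakdown seen in the log. A small Go sketch of polling /healthz the way a test helper might; the base URL and poll interval are illustrative assumptions, since the integration test binds a random port:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls the apiserver's /healthz endpoint until it returns 200 OK
// or the deadline passes, printing the per-check breakdown on each failure
// (the same "[+]ping ok / [-]etcd failed" text seen in the log above).
func waitHealthy(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz not ready:\n%s\n", body)
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	// Hypothetical local apiserver address; the test apiserver picks its own port.
	if err := waitHealthy("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}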
I0814 13:49:00.293130  110404 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.071955ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.296950  110404 httplog.go:90] GET /api/v1/services: (1.288038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.301692  110404 httplog.go:90] GET /api/v1/services: (1.139152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.304263  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.304298  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.304320  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.304330  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.304338  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.304369  110404 httplog.go:90] GET /healthz: (217.794µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.306862  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.566667ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57564]
I0814 13:49:00.307781  110404 httplog.go:90] GET /api/v1/services: (754.483µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57568]
I0814 13:49:00.307933  110404 httplog.go:90] GET /api/v1/services: (2.433714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.310304  110404 httplog.go:90] POST /api/v1/namespaces: (1.951406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57564]
E0814 13:49:00.310725  110404 factory.go:599] Error getting pod permit-plugin5efb5f77-e598-43ad-abd1-d06679ec2f70/test-pod for retry: Get http://127.0.0.1:44995/api/v1/namespaces/permit-plugin5efb5f77-e598-43ad-abd1-d06679ec2f70/pods/test-pod: dial tcp 127.0.0.1:44995: connect: connection refused; retrying...
I0814 13:49:00.312116  110404 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.096305ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.314234  110404 httplog.go:90] POST /api/v1/namespaces: (1.742867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.315656  110404 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (901.265µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.318222  110404 httplog.go:90] POST /api/v1/namespaces: (1.994392ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
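The GET-404-then-POST pairs above are the bootstrap controller seeding the built-in namespaces (kube-system, kube-public, kube-node-lease) on first startup. A client-go sketch of the same check-then-create pattern, written against a recent client-go (the 1.16-era client in this job used context-free signatures); the clientset construction is an assumption, since the integration test wires its own client:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// ensureNamespace mirrors the GET-404-then-POST pattern visible in the log:
// look the namespace up, and create it only if it does not exist yet.
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
	_, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return nil // already present
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}},
		metav1.CreateOptions{})
	return err
}

func main() {
	// Hypothetical in-cluster config for illustration only.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Println(err)
		return
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(context.Background(), cs, ns); err != nil {
			fmt.Println(err)
		}
	}
}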
I0814 13:49:00.392936  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.392983  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.392996  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.393005  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.393016  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.393052  110404 httplog.go:90] GET /healthz: (258.055µs) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:00.405242  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.405276  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.405286  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.405293  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.405299  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.405345  110404 httplog.go:90] GET /healthz: (232.752µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.492031  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.492074  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.492087  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.492097  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.492105  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.492134  110404 httplog.go:90] GET /healthz: (291.893µs) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:00.505887  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.505937  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.505951  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.505963  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.505972  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.506014  110404 httplog.go:90] GET /healthz: (336.963µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.592036  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.592081  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.593348  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.593360  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.593369  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.593413  110404 httplog.go:90] GET /healthz: (1.524309ms) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:00.605113  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.605152  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.605166  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.605175  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.605183  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.605211  110404 httplog.go:90] GET /healthz: (245.091µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.691653  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.691696  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.691709  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.691720  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.691728  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.691779  110404 httplog.go:90] GET /healthz: (278.056µs) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:00.705174  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.705216  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.705229  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.705240  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.705248  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.705289  110404 httplog.go:90] GET /healthz: (266.592µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.791731  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.791774  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.791787  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.791798  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.791806  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.791837  110404 httplog.go:90] GET /healthz: (253.919µs) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:00.805213  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.805253  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.805267  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.805277  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.805302  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.805354  110404 httplog.go:90] GET /healthz: (280.27µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.891871  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.891913  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.891941  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.891961  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.891970  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.891999  110404 httplog.go:90] GET /healthz: (275.893µs) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:00.905237  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.905275  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.905287  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.905298  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.905306  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.905406  110404 httplog.go:90] GET /healthz: (309.904µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:00.991706  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:00.991751  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:00.991763  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:00.991773  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:00.991780  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:00.991830  110404 httplog.go:90] GET /healthz: (284.689µs) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:01.005215  110404 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 13:49:01.005253  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.005264  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:01.005271  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:01.005276  110404 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:01.005298  110404 httplog.go:90] GET /healthz: (209.862µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:01.089233  110404 client.go:354] parsed scheme: ""
I0814 13:49:01.089264  110404 client.go:354] scheme "" not registered, fallback to default scheme
I0814 13:49:01.089308  110404 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 13:49:01.089369  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:01.090033  110404 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 13:49:01.090122  110404 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 13:49:01.093941  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.093973  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:01.093984  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:01.093993  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:01.094052  110404 httplog.go:90] GET /healthz: (2.193502ms) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:01.107221  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.107252  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:01.107263  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:01.107271  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:01.107320  110404 httplog.go:90] GET /healthz: (1.306582ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:01.192553  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.192585  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:01.192595  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:01.192604  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:01.192671  110404 httplog.go:90] GET /healthz: (1.100163ms) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:01.207298  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.207354  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:01.207362  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:01.207373  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:01.207458  110404 httplog.go:90] GET /healthz: (2.440052ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:01.292081  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.114559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0814 13:49:01.292600  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.292621  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:01.292631  110404 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 13:49:01.292637  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 13:49:01.292669  110404 httplog.go:90] GET /healthz: (930.491µs) 0 [Go-http-client/1.1 127.0.0.1:57592]
I0814 13:49:01.292730  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.411298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:01.294051  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.335423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0814 13:49:01.294086  110404 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (3.18417ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57568]
I0814 13:49:01.294948  110404 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.811311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:01.296240  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.839556ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0814 13:49:01.297348  110404 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.04837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:01.299721  110404 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (5.034926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.299969  110404 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 13:49:01.301795  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (5.152994ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0814 13:49:01.304120  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.163728ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0814 13:49:01.305013  110404 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (3.244713ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.305829  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.305849  110404 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 13:49:01.305857  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.305883  110404 httplog.go:90] GET /healthz: (787.621µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:01.305898  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.373339ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0814 13:49:01.308142  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.32784ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0814 13:49:01.308294  110404 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.43312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.308740  110404 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 13:49:01.308766  110404 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
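storage_scheduling.go has now seeded the two built-in priority classes, system-node-critical (2000001000) and system-cluster-critical (2000000000). A short client-go sketch of creating a user-level PriorityClass for comparison; the class name and value are illustrative, and the clientset setup is assumed as in the earlier sketch:

package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Hypothetical client setup; the integration test builds its own clientset.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Println(err)
		return
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// system-node-critical is the highest built-in class (2000001000 in the log);
	// user-defined classes must stay at or below 1000000000.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "example-high-priority"},
		Value:       1000000, // illustrative user-level value
		Description: "example class, well below the system-* classes",
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(context.Background(), pc, metav1.CreateOptions{}); err != nil {
		fmt.Println(err)
	}
}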
I0814 13:49:01.309349  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (844.587µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0814 13:49:01.312362  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.620933ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.316033  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (2.182739ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.321047  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.541772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.321775  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0814 13:49:01.323797  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (989.366µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.329422  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.063588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.329859  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 13:49:01.334231  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.172988ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.342834  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.491268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.343150  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 13:49:01.345858  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (2.34643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.351238  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.744342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.351603  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 13:49:01.352914  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.047762ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.356088  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.846862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.356398  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 13:49:01.358292  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.550335ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.361875  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.099868ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.362254  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 13:49:01.364359  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.576916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.372288  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (7.269318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.372831  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 13:49:01.375083  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.072393ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.378690  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.921715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.378920  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 13:49:01.381699  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (2.063384ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.387174  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.844779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.387533  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 13:49:01.389077  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.300919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.393849  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.393880  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.393921  110404 httplog.go:90] GET /healthz: (1.553469ms) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:01.395292  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.437493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.395636  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 13:49:01.397168  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.16205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.399484  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.504161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.400579  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 13:49:01.403572  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (2.41752ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.406533  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.406572  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.406603  110404 httplog.go:90] GET /healthz: (866.887µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:01.409080  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.143463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.409363  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 13:49:01.410546  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.039649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.417899  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.985116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.419175  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 13:49:01.422099  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (2.314514ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.425651  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.743594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.426106  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 13:49:01.427867  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.552687ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.430437  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.904742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.430838  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 13:49:01.436212  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (5.146742ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.439458  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.279606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.439972  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 13:49:01.442597  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (2.398687ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.446847  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.395461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.447153  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 13:49:01.454936  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (3.490556ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.458069  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.278547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.458311  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 13:49:01.459960  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.089389ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.463517  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.002361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.464163  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 13:49:01.465362  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (945.422µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.468967  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.7797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.469158  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 13:49:01.472594  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (3.138246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.477737  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.683168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.478093  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 13:49:01.481183  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (2.885373ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.483927  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.158078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.484302  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 13:49:01.486351  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.54062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.489636  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.388997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.490073  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 13:49:01.492422  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.492449  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.492474  110404 httplog.go:90] GET /healthz: (1.04749ms) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:01.493999  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (3.454244ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.496882  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.354156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.497122  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 13:49:01.498644  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.219424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.500907  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.883347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.501189  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0814 13:49:01.502972  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.286246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.506953  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.507165  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.507448  110404 httplog.go:90] GET /healthz: (2.588315ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:01.507166  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.660914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.508192  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 13:49:01.509898  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.376433ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.512366  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.884572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.512762  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 13:49:01.514424  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.463808ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.517107  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.854472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.517318  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 13:49:01.518402  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (806.271µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.521304  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.885838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.521718  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 13:49:01.522819  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (927.026µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.525063  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.843883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.525246  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 13:49:01.527222  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.486753ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.530799  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.86234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.531294  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 13:49:01.534926  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (3.09264ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.539255  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.697051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.539615  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 13:49:01.543120  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (3.040387ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.548459  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.271697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.549052  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 13:49:01.556422  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (6.991065ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.560407  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.125317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.560862  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 13:49:01.562391  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.256847ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.565481  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.392186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.565696  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 13:49:01.566973  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.067869ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.569886  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.450224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.570256  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 13:49:01.573063  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (2.6364ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.576952  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.282156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.577219  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 13:49:01.578406  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (957.976µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.582844  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.821193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.583122  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 13:49:01.584808  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.390995ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.588342  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.019549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.588699  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 13:49:01.590995  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.911398ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.594142  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.823295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.594725  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 13:49:01.595964  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.031201ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.598683  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.904725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.598911  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.599063  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.599342  110404 httplog.go:90] GET /healthz: (7.322257ms) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:01.599760  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 13:49:01.602605  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.535442ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.604624  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.620594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.604943  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 13:49:01.606062  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.606165  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.606407  110404 httplog.go:90] GET /healthz: (1.467205ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:01.606299  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.096121ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.610296  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.438842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.610593  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 13:49:01.612377  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.415533ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.616972  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.958424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.617664  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 13:49:01.619648  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.742237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.623174  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.716254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.623782  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 13:49:01.627177  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (3.122367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.631086  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.923409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.631819  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 13:49:01.638437  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (6.145902ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.642370  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.937737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.642832  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 13:49:01.644703  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.590915ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.647588  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.287161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.647837  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 13:49:01.650330  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (2.192228ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.655535  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.28445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.656124  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 13:49:01.657727  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.180107ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.661882  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.281602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.662946  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 13:49:01.665123  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.91418ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.669193  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.042745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.669852  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 13:49:01.673947  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (3.702305ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.678361  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.196219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.678929  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 13:49:01.682055  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (2.867586ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.684657  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.063585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.685199  110404 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 13:49:01.687171  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.673884ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.689287  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.548324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.689813  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 13:49:01.700371  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.700406  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.700456  110404 httplog.go:90] GET /healthz: (8.984902ms) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:01.701725  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (11.532304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.705160  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.787318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.705347  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 13:49:01.706310  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.706337  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.706366  110404 httplog.go:90] GET /healthz: (1.159919ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0814 13:49:01.707262  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.29432ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.717274  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.050362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.717706  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 13:49:01.736296  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (3.680582ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.754783  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.382686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.755189  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 13:49:01.772735  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.73037ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.806547  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.806596  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.806638  110404 httplog.go:90] GET /healthz: (1.41625ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:01.807377  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.937363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.807656  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 13:49:01.808215  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.808242  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.808276  110404 httplog.go:90] GET /healthz: (2.935306ms) 0 [Go-http-client/1.1 127.0.0.1:57566]
I0814 13:49:01.812288  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.155155ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.835054  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.743287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.835354  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 13:49:01.853212  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (2.050784ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.873231  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.307093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.873983  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 13:49:01.892854  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.892892  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.892953  110404 httplog.go:90] GET /healthz: (1.475829ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:01.893327  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.388128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.906233  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.906269  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.906310  110404 httplog.go:90] GET /healthz: (1.32401ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.913796  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.748756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.914079  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 13:49:01.933247  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (2.134418ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.954194  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.303464ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.954869  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0814 13:49:01.972919  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.834954ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.992935  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:01.994898  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:01.994040  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.04218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:01.996636  110404 httplog.go:90] GET /healthz: (5.199179ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:01.997567  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 13:49:02.006281  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.006317  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.006361  110404 httplog.go:90] GET /healthz: (1.374787ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.013663  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.587223ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.034340  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.277862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.035302  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 13:49:02.052924  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.750353ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.074158  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.765876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.074886  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 13:49:02.093279  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.093317  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.093373  110404 httplog.go:90] GET /healthz: (1.919647ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:02.093484  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (2.280353ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.107272  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.107331  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.107376  110404 httplog.go:90] GET /healthz: (1.272904ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.114393  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.264221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.114690  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 13:49:02.132944  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (2.000436ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.154833  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.884153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.155077  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 13:49:02.172347  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.225099ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.194546  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.194742  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.194794  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.718914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.194825  110404 httplog.go:90] GET /healthz: (3.374944ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:02.195083  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 13:49:02.206165  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.206199  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.206245  110404 httplog.go:90] GET /healthz: (1.168262ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.212204  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.298846ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.234215  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.000986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.234659  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 13:49:02.253091  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.704161ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.274077  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.050426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.274893  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 13:49:02.292146  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.224189ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.294492  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.294547  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.294583  110404 httplog.go:90] GET /healthz: (1.605784ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:02.307880  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.307915  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.307955  110404 httplog.go:90] GET /healthz: (2.93935ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.316844  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.709364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.317130  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 13:49:02.333172  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.977974ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.357682  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.73213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.360487  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 13:49:02.372477  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.564814ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.392475  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.392524  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.392561  110404 httplog.go:90] GET /healthz: (1.120054ms) 0 [Go-http-client/1.1 127.0.0.1:57592]
I0814 13:49:02.393304  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.405203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.393782  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 13:49:02.407009  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.407256  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.407450  110404 httplog.go:90] GET /healthz: (2.340041ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.412431  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.518016ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.434807  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.667004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.435152  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 13:49:02.452447  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.564547ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.473312  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.371047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.473563  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 13:49:02.493448  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.320391ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.495826  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.495852  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.496296  110404 httplog.go:90] GET /healthz: (4.851925ms) 0 [Go-http-client/1.1 127.0.0.1:57592]
I0814 13:49:02.506970  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.506998  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.507041  110404 httplog.go:90] GET /healthz: (1.365513ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.514882  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.752342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.515154  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 13:49:02.535762  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (4.70348ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.554318  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.378415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.554649  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 13:49:02.580969  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (10.133239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.595350  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.595404  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.595556  110404 httplog.go:90] GET /healthz: (3.547511ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:02.596376  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.476935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.596605  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 13:49:02.606789  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.606852  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.606943  110404 httplog.go:90] GET /healthz: (1.885672ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.613282  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.810074ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.634130  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.144436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.635213  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 13:49:02.653893  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (2.76519ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.673238  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.080711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.673745  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 13:49:02.693147  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.182675ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.693195  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.693214  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.694174  110404 httplog.go:90] GET /healthz: (2.783216ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:02.706169  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.706202  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.706261  110404 httplog.go:90] GET /healthz: (1.19014ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.714040  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.19778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.714296  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 13:49:02.733430  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.167268ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.754297  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.954292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.754600  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 13:49:02.772201  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.373352ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.793064  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.793098  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.793138  110404 httplog.go:90] GET /healthz: (1.706828ms) 0 [Go-http-client/1.1 127.0.0.1:57592]
I0814 13:49:02.795008  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.959585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.795223  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 13:49:02.806776  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.806819  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.806874  110404 httplog.go:90] GET /healthz: (1.894049ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.812753  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.630281ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.834131  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.130031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.834470  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 13:49:02.852292  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.299903ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.873320  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.448244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.873927  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 13:49:02.892921  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.865081ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:02.893675  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.893903  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.894183  110404 httplog.go:90] GET /healthz: (1.932538ms) 0 [Go-http-client/1.1 127.0.0.1:57592]
I0814 13:49:02.906252  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.906282  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.906331  110404 httplog.go:90] GET /healthz: (1.251438ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.915896  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.014484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.916226  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 13:49:02.932963  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.945367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.953614  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.378598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.954233  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 13:49:02.972960  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.876594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.993436  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:02.993922  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:02.994222  110404 httplog.go:90] GET /healthz: (2.599476ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:02.993593  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.027249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:02.995224  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 13:49:03.007142  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:03.007184  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:03.007260  110404 httplog.go:90] GET /healthz: (2.01203ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.020018  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (7.612062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.034083  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.335242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.034414  110404 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 13:49:03.052486  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.280352ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.057347  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.138441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.089010  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (18.108794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.089518  110404 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 13:49:03.102134  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:03.102169  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:03.102225  110404 httplog.go:90] GET /healthz: (10.800603ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:03.102726  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (11.978741ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.105037  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.857652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.112957  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:03.112985  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:03.113035  110404 httplog.go:90] GET /healthz: (6.614108ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:03.117793  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (5.921636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.118004  110404 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 13:49:03.132091  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.191083ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.139600  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (5.597263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.153006  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.913602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.153263  110404 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 13:49:03.172044  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.170981ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.173814  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.358162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.194816  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.597382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.194922  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:03.194944  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:03.194990  110404 httplog.go:90] GET /healthz: (3.228153ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:03.195070  110404 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 13:49:03.206388  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:03.206419  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:03.206487  110404 httplog.go:90] GET /healthz: (1.480559ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:03.212772  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.899019ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:03.216273  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.786449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:03.234257  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.825155ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:03.234553  110404 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 13:49:03.253152  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.995502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:03.256647  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.735503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:03.274975  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.669362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:03.275283  110404 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 13:49:03.292659  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.733891ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:03.292766  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:03.292788  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:03.292825  110404 httplog.go:90] GET /healthz: (1.163759ms) 0 [Go-http-client/1.1 127.0.0.1:57592]
I0814 13:49:03.294765  110404 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.630741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.306645  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:03.309774  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:03.309932  110404 httplog.go:90] GET /healthz: (4.854321ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.314146  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.843356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.314406  110404 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 13:49:03.338954  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (7.65346ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.353617  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (14.025117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.357058  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.055139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.357299  110404 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 13:49:03.372280  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.300698ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.374601  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.820293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.393417  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:03.393451  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:03.393831  110404 httplog.go:90] GET /healthz: (1.825877ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:03.394241  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.302769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.394516  110404 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 13:49:03.406057  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:03.406091  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:03.406146  110404 httplog.go:90] GET /healthz: (1.201651ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.413373  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.212884ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.416266  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.474849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.434274  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.302126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.434563  110404 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 13:49:03.452076  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.19548ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.454297  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.750074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.479938  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (6.483059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.480227  110404 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 13:49:03.492829  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.940002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.495727  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:03.495755  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:03.495807  110404 httplog.go:90] GET /healthz: (3.527178ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:03.516125  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:03.516160  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:03.516208  110404 httplog.go:90] GET /healthz: (11.271484ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:03.516653  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (23.397116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.519705  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.545253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.519972  110404 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 13:49:03.532147  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.305084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.534236  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.636358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.553390  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.473949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.553695  110404 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 13:49:03.572409  110404 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.406916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.576331  110404 httplog.go:90] GET /api/v1/namespaces/kube-public: (3.471629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.595166  110404 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (4.213078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.595324  110404 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 13:49:03.595341  110404 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 13:49:03.595380  110404 httplog.go:90] GET /healthz: (3.464464ms) 0 [Go-http-client/1.1 127.0.0.1:57604]
I0814 13:49:03.595717  110404 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 13:49:03.606180  110404 httplog.go:90] GET /healthz: (1.147809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.608158  110404 httplog.go:90] GET /api/v1/namespaces/default: (1.612613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.631542  110404 httplog.go:90] POST /api/v1/namespaces: (22.92253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.633473  110404 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.367531ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.638316  110404 httplog.go:90] POST /api/v1/namespaces/default/services: (4.333116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.640071  110404 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.328952ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.643010  110404 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.581225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.692785  110404 httplog.go:90] GET /healthz: (1.291203ms) 200 [Go-http-client/1.1 127.0.0.1:57592]
W0814 13:49:03.694761  110404 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:49:03.694791  110404 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:49:03.694810  110404 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:49:03.694818  110404 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:49:03.694828  110404 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:49:03.694836  110404 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:49:03.694868  110404 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:49:03.694888  110404 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:49:03.694901  110404 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:49:03.694965  110404 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 13:49:03.694975  110404 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 13:49:03.695005  110404 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0814 13:49:03.695017  110404 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
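(Aside: the line above shows the scheduler being assembled from the DefaultProvider's fit predicates and priority functions; a node is feasible for a pod only if every registered predicate passes. A small, self-contained Go sketch of that filtering idea follows; the pod/node types and predicate bodies are illustrative assumptions, not kube-scheduler's real API.)

// Conceptual sketch: filter nodes through a map of named fit predicates.
package main

import "fmt"

type pod struct {
	name     string
	cpuMilli int64
}

type node struct {
	name              string
	freeCPUMilli      int64
	taintedNoSchedule bool
}

// fitPredicate returns true when pod p fits on node n (illustrative only).
type fitPredicate func(p pod, n node) bool

func main() {
	predicates := map[string]fitPredicate{
		"GeneralPredicates":      func(p pod, n node) bool { return p.cpuMilli <= n.freeCPUMilli },
		"PodToleratesNodeTaints": func(p pod, n node) bool { return !n.taintedNoSchedule },
	}

	p := pod{name: "waiting-pod", cpuMilli: 100}
	nodes := []node{{name: "test-node-0", freeCPUMilli: 500}}

	// A node is feasible only if every predicate accepts the pod.
	for _, n := range nodes {
		feasible := true
		for name, pred := range predicates {
			if !pred(p, n) {
				fmt.Printf("node %s rejected by %s\n", n.name, name)
				feasible = false
				break
			}
		}
		if feasible {
			fmt.Printf("node %s is feasible for pod %s\n", n.name, p.name)
		}
	}
}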
I0814 13:49:03.695473  110404 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.695491  110404 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.695846  110404 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.695857  110404 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.696159  110404 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.696169  110404 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.696577  110404 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (812.544µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:49:03.697857  110404 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (1.124454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57948]
I0814 13:49:03.698096  110404 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (1.368184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:49:03.698452  110404 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=29403 labels= fields= timeout=5m24s
I0814 13:49:03.698672  110404 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=29370 labels= fields= timeout=7m37s
I0814 13:49:03.698741  110404 reflector.go:122] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.698752  110404 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.698911  110404 get.go:250] Starting watch for /api/v1/services, rv=29720 labels= fields= timeout=5m52s
I0814 13:49:03.699048  110404 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.699065  110404 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.699203  110404 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.699215  110404 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.699420  110404 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.699434  110404 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.699473  110404 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.699483  110404 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.699566  110404 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (421.612µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57950]
I0814 13:49:03.700856  110404 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (419.526µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57954]
I0814 13:49:03.700943  110404 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (316.762µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57958]
I0814 13:49:03.701151  110404 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (409.253µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57950]
I0814 13:49:03.701219  110404 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=29399 labels= fields= timeout=8m38s
I0814 13:49:03.701442  110404 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (493.623µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57956]
I0814 13:49:03.701765  110404 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=29370 labels= fields= timeout=5m50s
I0814 13:49:03.702088  110404 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=29404 labels= fields= timeout=7m39s
I0814 13:49:03.702090  110404 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=29404 labels= fields= timeout=8m44s
I0814 13:49:03.702362  110404 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.702377  110404 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.703016  110404 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (319.6µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57962]
I0814 13:49:03.703775  110404 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.703790  110404 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.704069  110404 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=29373 labels= fields= timeout=9m8s
I0814 13:49:03.704184  110404 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.704195  110404 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0814 13:49:03.704750  110404 get.go:250] Starting watch for /api/v1/nodes, rv=29371 labels= fields= timeout=7m46s
I0814 13:49:03.704847  110404 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (397.724µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57964]
I0814 13:49:03.705306  110404 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (299.628µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57966]
I0814 13:49:03.705479  110404 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=29393 labels= fields= timeout=6m29s
I0814 13:49:03.705861  110404 get.go:250] Starting watch for /api/v1/pods, rv=29371 labels= fields= timeout=5m2s
I0814 13:49:03.795661  110404 shared_informer.go:211] caches populated
I0814 13:49:03.895858  110404 shared_informer.go:211] caches populated
I0814 13:49:03.997373  110404 shared_informer.go:211] caches populated
I0814 13:49:04.098086  110404 shared_informer.go:211] caches populated
I0814 13:49:04.198532  110404 shared_informer.go:211] caches populated
I0814 13:49:04.298730  110404 shared_informer.go:211] caches populated
I0814 13:49:04.398922  110404 shared_informer.go:211] caches populated
I0814 13:49:04.499091  110404 shared_informer.go:211] caches populated
I0814 13:49:04.599322  110404 shared_informer.go:211] caches populated
I0814 13:49:04.698407  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.698670  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.699535  110404 shared_informer.go:211] caches populated
I0814 13:49:04.700999  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.701993  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.702036  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.704004  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.705892  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:04.799780  110404 shared_informer.go:211] caches populated
I0814 13:49:04.900122  110404 shared_informer.go:211] caches populated
I0814 13:49:04.903733  110404 httplog.go:90] POST /api/v1/nodes: (2.57852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:04.904309  110404 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0814 13:49:04.906946  110404 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods: (2.727024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:04.907248  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/waiting-pod
I0814 13:49:04.907261  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/waiting-pod
I0814 13:49:04.907390  110404 scheduler_binder.go:256] AssumePodVolumes for pod "preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/waiting-pod", node "test-node-0"
I0814 13:49:04.907406  110404 scheduler_binder.go:266] AssumePodVolumes for pod "preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/waiting-pod", node "test-node-0": all PVCs bound and nothing to do
I0814 13:49:04.907456  110404 framework.go:562] waiting for 30s for pod "waiting-pod" at permit
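(Aside: "waiting for 30s for pod ... at permit" means a Permit-phase plugin asked the framework to hold the pod before binding, for at most 30 seconds. A conceptual Go sketch of such a wait-with-timeout gate follows; all names here are assumptions, not the scheduler framework's actual Permit interface.)

// Illustrative sketch: hold a pod at a "permit" gate until approval or timeout.
package main

import (
	"fmt"
	"time"
)

type waitingPod struct {
	name    string
	approve chan bool // decision channel: true = allow binding, false = reject
}

// waitAtPermit blocks until the pod is approved, rejected, or the timeout expires.
func waitAtPermit(p waitingPod, timeout time.Duration) error {
	select {
	case ok := <-p.approve:
		if !ok {
			return fmt.Errorf("pod %s rejected at permit", p.name)
		}
		return nil
	case <-time.After(timeout):
		return fmt.Errorf("pod %s timed out at permit after %s", p.name, timeout)
	}
}

func main() {
	p := waitingPod{name: "waiting-pod", approve: make(chan bool, 1)}

	// Some other goroutine (e.g. a second pod's plugin callback) decides later.
	go func() {
		time.Sleep(100 * time.Millisecond)
		p.approve <- true
	}()

	if err := waitAtPermit(p, 30*time.Second); err != nil {
		fmt.Println("permit failed:", err)
		return
	}
	fmt.Println("pod", p.name, "allowed to bind")
}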
I0814 13:49:04.910775  110404 factory.go:615] Attempting to bind signalling-pod to test-node-0
I0814 13:49:04.910802  110404 factory.go:615] Attempting to bind waiting-pod to test-node-0
I0814 13:49:04.911255  110404 scheduler.go:447] Failed to bind pod: permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod
E0814 13:49:04.911267  110404 scheduler.go:449] scheduler cache ForgetPod failed: pod 81e30124-a9c5-4aa6-b289-a003c2c1aa16 wasn't assumed so cannot be forgotten
E0814 13:49:04.913078  110404 scheduler.go:605] error binding pod: Post http://127.0.0.1:42099/api/v1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/pods/signalling-pod/binding: dial tcp 127.0.0.1:42099: connect: connection refused
E0814 13:49:04.913110  110404 factory.go:566] Error scheduling permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod: Post http://127.0.0.1:42099/api/v1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/pods/signalling-pod/binding: dial tcp 127.0.0.1:42099: connect: connection refused; retrying
I0814 13:49:04.913146  110404 factory.go:624] Updating pod condition for permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0814 13:49:04.914221  110404 factory.go:599] Error getting pod permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod for retry: Get http://127.0.0.1:42099/api/v1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/pods/signalling-pod: dial tcp 127.0.0.1:42099: connect: connection refused; retrying...
E0814 13:49:04.914796  110404 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:42099/apis/events.k8s.io/v1beta1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/events: dial tcp 127.0.0.1:42099: connect: connection refused' (may retry after sleeping)
E0814 13:49:04.915613  110404 scheduler.go:280] Error updating the condition of the pod permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod: Put http://127.0.0.1:42099/api/v1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/pods/signalling-pod/status: dial tcp 127.0.0.1:42099: connect: connection refused
I0814 13:49:04.917705  110404 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/waiting-pod/binding: (6.483008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:04.918116  110404 scheduler.go:614] pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/waiting-pod is bound successfully on node "test-node-0", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
I0814 13:49:04.920197  110404 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/events: (1.665586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
E0814 13:49:05.114814  110404 factory.go:599] Error getting pod permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod for retry: Get http://127.0.0.1:42099/api/v1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/pods/signalling-pod: dial tcp 127.0.0.1:42099: connect: connection refused; retrying...
E0814 13:49:05.515397  110404 factory.go:599] Error getting pod permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod for retry: Get http://127.0.0.1:42099/api/v1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/pods/signalling-pod: dial tcp 127.0.0.1:42099: connect: connection refused; retrying...
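(Aside: the spacing of the "retrying..." lines above roughly doubles, about 0.2s, 0.4s, 0.8s and so on, which looks like exponential backoff between retries of the failed pod fetch. A minimal Go sketch of that retry pattern follows, offered as an assumption about the observed behavior rather than the scheduler's actual code.)

// Minimal sketch: retry an operation with a doubling delay between attempts.
package main

import (
	"fmt"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay *= 2 // double the wait before the next attempt
	}
	return fmt.Errorf("giving up after %d attempts: %v", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err)
}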
I0814 13:49:05.698593  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.699069  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.701160  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.702163  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.702205  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.704153  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:05.706177  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:06.316315  110404 factory.go:599] Error getting pod permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod for retry: Get http://127.0.0.1:42099/api/v1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/pods/signalling-pod: dial tcp 127.0.0.1:42099: connect: connection refused; retrying...
I0814 13:49:06.698984  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.699261  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.701356  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.702314  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.702345  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.704308  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:06.706375  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.699195  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.699436  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.701555  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.702474  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.702718  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.706281  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:07.706535  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:07.917336  110404 factory.go:599] Error getting pod permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod for retry: Get http://127.0.0.1:42099/api/v1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/pods/signalling-pod: dial tcp 127.0.0.1:42099: connect: connection refused; retrying...
I0814 13:49:08.699641  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.699728  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.701933  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.702570  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.702986  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.706459  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:08.707374  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.699846  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.700366  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.702218  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.703132  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.703362  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.706795  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:09.707479  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:09.711247  110404 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:44995/apis/events.k8s.io/v1beta1/namespaces/permit-plugin5efb5f77-e598-43ad-abd1-d06679ec2f70/events: dial tcp 127.0.0.1:44995: connect: connection refused' (may retry after sleeping)
I0814 13:49:10.700111  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.700733  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.702387  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.703285  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.703727  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.706986  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:10.707975  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:11.118047  110404 factory.go:599] Error getting pod permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod for retry: Get http://127.0.0.1:42099/api/v1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/pods/signalling-pod: dial tcp 127.0.0.1:42099: connect: connection refused; retrying...
I0814 13:49:11.700471  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:11.702564  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:11.702934  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:11.703450  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:11.703877  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:11.707241  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:11.709036  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:12.700707  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:12.702917  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:12.703598  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:12.704027  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:12.704072  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:12.707415  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:12.709243  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:13.111398  110404 factory.go:599] Error getting pod permit-plugin5efb5f77-e598-43ad-abd1-d06679ec2f70/test-pod for retry: Get http://127.0.0.1:44995/api/v1/namespaces/permit-plugin5efb5f77-e598-43ad-abd1-d06679ec2f70/pods/test-pod: dial tcp 127.0.0.1:44995: connect: connection refused; retrying...
I0814 13:49:13.609304  110404 httplog.go:90] GET /api/v1/namespaces/default: (2.423124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:13.612839  110404 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.783608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:13.614839  110404 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.635221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:13.700868  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:13.703092  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:13.703760  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:13.704184  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:13.704211  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:13.707582  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:13.709728  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:14.701087  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:14.703291  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:14.703938  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:14.704385  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:14.704430  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:14.707923  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:14.710099  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:15.701523  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:15.703488  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:15.704075  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:15.704804  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:15.704833  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:15.708147  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:15.710235  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:16.208217  110404 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:42099/apis/events.k8s.io/v1beta1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/events: dial tcp 127.0.0.1:42099: connect: connection refused' (may retry after sleeping)
I0814 13:49:16.701923  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:16.703618  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:16.704243  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:16.704955  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:16.704996  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:16.708359  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:16.710412  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:17.518860  110404 factory.go:599] Error getting pod permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod for retry: Get http://127.0.0.1:42099/api/v1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/pods/signalling-pod: dial tcp 127.0.0.1:42099: connect: connection refused; retrying...
I0814 13:49:17.702122  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:17.703799  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:17.704399  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:17.705062  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:17.705118  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:17.708546  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:17.710586  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:18.702334  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:18.703968  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:18.704568  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:18.705226  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:18.705246  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:18.708687  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:18.710992  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:19.703079  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:19.704118  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:19.704723  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:19.705346  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:19.705381  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:19.708861  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:19.711189  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:20.703296  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:20.704492  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:20.705036  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:20.705528  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:20.705553  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:20.709028  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:20.713837  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:21.359269  110404 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:44995/apis/events.k8s.io/v1beta1/namespaces/permit-plugin5efb5f77-e598-43ad-abd1-d06679ec2f70/events: dial tcp 127.0.0.1:44995: connect: connection refused' (may retry after sleeping)
I0814 13:49:21.703767  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:21.705087  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:21.705219  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:21.705813  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:21.705856  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:21.709188  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:21.714035  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:22.703973  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:22.705234  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:22.705468  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:22.706102  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:22.706128  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:22.709379  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:22.714218  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:23.609230  110404 httplog.go:90] GET /api/v1/namespaces/default: (2.012446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:23.611328  110404 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.620256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:23.612973  110404 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.206166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:23.704206  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:23.705424  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:23.705945  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:23.706234  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:23.706293  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:23.709769  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:23.715841  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:24.704403  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:24.705574  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:24.706103  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:24.706382  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:24.706399  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:24.710165  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:24.716056  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:25.704986  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:25.705749  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:25.706299  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:25.706488  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:25.706569  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:25.710357  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:25.716324  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:26.705169  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:26.705878  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:26.706400  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:26.706609  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:26.706717  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:26.710581  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:26.717168  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:27.241325  110404 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:42099/apis/events.k8s.io/v1beta1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/events: dial tcp 127.0.0.1:42099: connect: connection refused' (may retry after sleeping)
I0814 13:49:27.705386  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:27.706046  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:27.706568  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:27.706858  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:27.706957  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:27.710745  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:27.717398  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:28.705548  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:28.706240  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:28.706875  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:28.707136  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:28.707185  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:28.710939  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:28.717782  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:29.705798  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:29.706413  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:29.707035  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:29.707259  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:29.707349  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:29.711066  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:29.717998  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:30.319423  110404 factory.go:599] Error getting pod permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod for retry: Get http://127.0.0.1:42099/api/v1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/pods/signalling-pod: dial tcp 127.0.0.1:42099: connect: connection refused; retrying...
I0814 13:49:30.705992  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:30.706823  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:30.707120  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:30.707366  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:30.707459  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:30.711259  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:30.718201  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:31.706200  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:31.707121  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:31.707297  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:31.707535  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:31.707604  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:31.711761  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:31.718412  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:32.259719  110404 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:44995/apis/events.k8s.io/v1beta1/namespaces/permit-plugin5efb5f77-e598-43ad-abd1-d06679ec2f70/events: dial tcp 127.0.0.1:44995: connect: connection refused' (may retry after sleeping)
I0814 13:49:32.706435  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:32.707276  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:32.707446  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:32.707819  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:32.707841  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:32.711951  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:32.718621  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:33.609310  110404 httplog.go:90] GET /api/v1/namespaces/default: (1.920987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:33.611869  110404 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.69689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:33.616293  110404 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (3.899277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:33.706605  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:33.707448  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:33.707970  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:33.707998  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:33.708091  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:33.712127  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:33.718812  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:34.706822  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:34.707779  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:34.708037  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:34.708131  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:34.708312  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:34.712404  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:34.719029  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:34.911247  110404 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods: (3.115269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:34.911675  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:34.911699  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:34.911815  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:34.911865  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:34.915412  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.722453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:34.915758  110404 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod/status: (3.597318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
E0814 13:49:34.915979  110404 factory.go:590] pod is already present in the activeQ
I0814 13:49:34.916045  110404 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/events: (2.215366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0814 13:49:34.917695  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.268735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58004]
I0814 13:49:34.918142  110404 generic_scheduler.go:1191] Node test-node-0 is a potential node for preemption.
I0814 13:49:34.921207  110404 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod/status: (2.409729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0814 13:49:34.924595  110404 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/waiting-pod: (2.811103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0814 13:49:34.924946  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:34.924961  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:34.925172  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:34.925206  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:34.928312  110404 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/events: (3.108442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33794]
I0814 13:49:34.928447  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.317291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:34.928327  110404 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/events: (2.464623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33796]
I0814 13:49:34.928340  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.485924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
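The burst above is the preemption path this test exercises: preemptor-pod cannot be placed on the single node (1 Insufficient cpu, 1 Insufficient memory), generic_scheduler marks test-node-0 as a potential node for preemption, and waiting-pod is deleted as the victim before the preemptor is retried. As a rough illustration only, the sketch below builds a pod object whose resource requests are assumed to exceed the test node's allocatable capacity, which is what yields the "0/1 nodes are available" status seen in the log; the names, image, and quantities are hypothetical and not taken from the TestPreemptWithPermitPlugin source.

    // examplepod.go — hypothetical sketch, not from the test source.
    package example

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // examplePreemptorPod returns a pod whose requests are assumed to exceed the
    // single test node's allocatable CPU and memory, producing a status like
    // "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory."
    // The real test additionally gives the preemptor a higher priority than the
    // waiting pod so that preemption can occur; that detail is omitted here.
    func examplePreemptorPod(ns string) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod", Namespace: ns},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.1",
                    Resources: v1.ResourceRequirements{
                        Requests: v1.ResourceList{
                            // Illustrative quantities only.
                            v1.ResourceCPU:    resource.MustParse("4"),
                            v1.ResourceMemory: resource.MustParse("16Gi"),
                        },
                    },
                }},
            },
        }
    }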
I0814 13:49:35.015216  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.288199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:35.114135  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.981429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:35.215366  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.043243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:35.317258  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.782139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:35.424524  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (12.222076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:35.514890  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.740842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:35.614917  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.417687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:35.707158  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:35.707963  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:35.708268  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:35.708301  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:35.708706  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:35.715026  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.150649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:35.716760  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:35.719227  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:35.719383  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:35.719403  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:35.719542  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:35.719725  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:35.722251  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.011785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:35.724656  110404 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/events/preemptor-pod.15bace3b63121a04: (4.073221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:35.725885  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.357924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:35.814869  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.730071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:35.915230  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.905082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:36.013705  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.580329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:36.114312  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.089453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:36.214748  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.669747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:36.316818  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.228592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:36.414195  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.746699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:36.514044  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.78201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:36.614950  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.994073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:36.701845  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:36.701903  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:36.702073  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:36.702134  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:36.704391  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.965525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:36.705293  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.579113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:36.707360  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:36.708087  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:36.708434  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:36.708451  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:36.709016  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:36.713896  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.759623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:36.717008  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:36.719419  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:36.719543  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:36.719555  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:36.719769  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:36.719802  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:36.722145  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.979369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:36.723100  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.008476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:36.813893  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.879348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:36.915336  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.29793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:37.013880  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.799934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:37.114022  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.968591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:37.214221  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.166791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:37.314014  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.871677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:37.413699  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.534882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:37.514256  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.196942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:37.613717  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.634821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:37.707805  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:37.708441  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:37.708721  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:37.708727  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:37.709361  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:37.714095  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.111031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:37.718416  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:37.719589  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:37.719745  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:37.719766  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:37.719943  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:37.719996  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:37.722391  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.134944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:37.722391  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.994493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:37.814001  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.994656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:37.914376  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.226151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:38.014254  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.047977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:38.114417  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.325783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:38.214734  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.681428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:38.314344  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.223702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:38.414308  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.171575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:38.514602  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.408611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:38.614301  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.176497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:38.708022  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:38.708168  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:38.708180  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:38.708340  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:38.708392  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:38.708689  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:38.708854  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:38.708876  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:38.709430  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:38.710948  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.975459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:38.710949  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.973035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
E0814 13:49:38.712113  110404 factory.go:599] Error getting pod permit-plugin5efb5f77-e598-43ad-abd1-d06679ec2f70/test-pod for retry: Get http://127.0.0.1:44995/api/v1/namespaces/permit-plugin5efb5f77-e598-43ad-abd1-d06679ec2f70/pods/test-pod: dial tcp 127.0.0.1:44995: connect: connection refused; retrying...
I0814 13:49:38.719070  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (7.032363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:38.719239  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:38.719748  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:38.750595  110404 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:42099/apis/events.k8s.io/v1beta1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/events: dial tcp 127.0.0.1:42099: connect: connection refused' (may retry after sleeping)
I0814 13:49:38.814273  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.133535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:38.914232  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.09488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:39.015483  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.361258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:39.114590  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.267608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:39.214351  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.207751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:39.314779  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.540071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:39.414322  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.240016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:39.516050  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.377055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:39.614647  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.546447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:39.708302  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:39.708894  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:39.708948  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:39.708962  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:39.709574  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:39.714007  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.872175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:39.719406  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:39.720034  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:39.720150  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:39.720166  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:39.720316  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:39.720390  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:39.723108  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.030955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:39.723211  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.720952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:39.813988  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.935338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:39.915116  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.955189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:40.014437  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.331826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:40.114830  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.50738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:40.217528  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (4.521667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:40.314706  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.564725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:40.413553  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.519278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:40.514245  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.185278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:40.614240  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.165035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:40.708722  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:40.709083  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:40.709106  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:40.709387  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:40.709947  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:40.717792  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (5.55951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:40.719571  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:40.721910  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:40.722183  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:40.722202  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:40.722339  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:40.722381  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:40.726603  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.27248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:40.726764  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.722652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:40.814102  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.033092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:40.914730  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.625507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:41.018530  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (5.70628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:41.115213  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.638287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:41.214361  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.125399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:41.314920  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.815341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:41.414410  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.944849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:41.515190  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.842658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:41.614171  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.986987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:41.708947  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:41.709187  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:41.709282  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:41.709775  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:41.710097  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:41.714160  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.07536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:41.719789  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:41.722206  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:41.722379  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:41.722402  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:41.722530  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:41.722572  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:41.724405  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.426116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:41.724571  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.132233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:41.814272  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.204847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:41.914361  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.266216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:42.014475  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.251566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:42.114770  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.402943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:42.223562  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (11.355526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:42.314863  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.491737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:42.414362  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.292057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
E0814 13:49:42.416350  110404 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:44995/apis/events.k8s.io/v1beta1/namespaces/permit-plugin5efb5f77-e598-43ad-abd1-d06679ec2f70/events: dial tcp 127.0.0.1:44995: connect: connection refused' (may retry after sleeping)
I0814 13:49:42.514810  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.66171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:42.614213  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.229293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:42.709149  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:42.709415  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:42.709718  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:42.709927  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:42.710257  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:42.714242  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.144391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:42.720015  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:42.722473  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:42.722976  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:42.723026  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:42.723332  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:42.723424  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:42.725761  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.726223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:42.727383  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.464049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:42.814736  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.489403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:42.914486  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.34853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:43.013936  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.888324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:43.115369  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.173645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:43.214348  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:43.314925  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.517054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:43.416245  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.174833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:43.514140  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.05877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:43.609795  110404 httplog.go:90] GET /api/v1/namespaces/default: (2.032331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:43.612061  110404 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.789664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:43.614268  110404 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.787659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:43.614798  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.559692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:43.709365  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:43.709547  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:43.709883  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:43.710083  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:43.710396  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:43.714214  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.137476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:43.720216  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:43.722912  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:43.723072  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:43.723106  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:43.723251  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:43.723297  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:43.727464  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.710672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:43.727632  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.849569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:43.814392  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.206853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:43.914771  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.572487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:44.015158  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.299252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:44.114320  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.219304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:44.216129  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (4.015774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:44.313991  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.878304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:44.415115  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.964534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:44.514214  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.913778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:44.614261  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.194857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:44.709658  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:44.709718  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:44.710076  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:44.710332  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:44.710640  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:44.714622  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.480154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:44.720447  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:44.723086  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:44.723310  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:44.723365  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:44.723625  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:44.723784  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:44.726668  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.529908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:44.726990  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.0104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:44.814314  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.100352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:44.913973  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.859269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:45.013544  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.554619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:45.114067  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.040647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:45.214231  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.773142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:45.314650  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.114884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:45.413993  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.926657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:45.514902  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.821317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:45.614413  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.21425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:45.709850  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:45.709983  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:45.710322  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:45.710519  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:45.710813  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:45.714063  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.967845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:45.721188  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:45.723383  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:45.723779  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:45.723804  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:45.723963  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:45.724015  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:45.726377  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.471381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:45.727677  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.275312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:45.814320  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.110191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:45.914491  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.390698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:46.014293  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.095894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:46.114773  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.562989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:46.213962  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.881644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:46.314385  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.269155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:46.414853  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.688027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:46.514532  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.473934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:46.614395  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.273673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:46.710126  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:46.710512  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:46.710663  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:46.710683  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:46.710940  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:46.718934  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (6.756516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:46.721383  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:46.723550  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:46.723725  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:46.723740  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:46.723853  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:46.723885  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:46.726214  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.957629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:46.726410  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.165984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:46.814923  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.010034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:46.914943  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.685888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:47.015128  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.920181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:47.114194  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.09192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:47.214045  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.963311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:47.314942  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.46083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:47.414084  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.022946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:47.514154  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.156395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:47.613918  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.843444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:47.710820  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:47.710858  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:47.711175  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:47.711298  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:47.711637  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:47.720628  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (7.956807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:47.721777  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:47.723742  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:47.723889  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:47.723903  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:47.724025  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:47.724063  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:47.726839  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.012395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:47.728346  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.341899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:47.814489  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.324128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:47.915099  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.93134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:48.014993  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.891837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:48.120650  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (8.271454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:48.214017  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.98581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:48.314116  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.956142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:48.414980  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.979817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:48.513729  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.665726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:48.614479  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.621063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:48.710997  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:48.710997  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:48.711275  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:48.711472  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:48.711850  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:48.714162  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.883436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:48.722042  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:48.723856  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:48.724030  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:48.724050  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:48.724206  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:48.724342  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:48.727663  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.811128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:48.728005  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.756508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:48.814451  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.147442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:48.914123  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.121718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:49.015274  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.204505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:49.114326  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.034284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:49.214903  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.453589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:49.316326  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.1643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:49.413961  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.83533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:49.515152  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.824506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:49.617702  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (5.621251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:49.711214  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:49.711219  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:49.711445  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:49.711562  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:49.711951  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:49.714016  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.943916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:49.722217  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:49.724202  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:49.724402  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:49.724778  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:49.725019  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:49.725078  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:49.727838  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.347411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:49.728900  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.948956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:49.814460  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.363481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:49.915545  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.486363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:50.015425  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.40222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
E0814 13:49:50.027034  110404 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:42099/apis/events.k8s.io/v1beta1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/events: dial tcp 127.0.0.1:42099: connect: connection refused' (may retry after sleeping)
I0814 13:49:50.113993  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.972748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:50.215588  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.525713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:50.317782  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.612594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:50.413886  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.810447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:50.514023  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.03193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:50.614297  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.839737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:50.711384  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:50.711389  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:50.711754  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:50.711979  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:50.719222  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:50.720291  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (8.003668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:50.722388  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:50.725117  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:50.725339  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:50.725357  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:50.725532  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:50.725614  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:50.727869  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.962365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:50.728001  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.92674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:50.816402  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (4.23648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:50.914366  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.271029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:51.015365  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.226359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:51.115012  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.906327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:51.213777  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.699788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:51.319782  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (7.777204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:51.414009  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.983454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:51.514148  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.074047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:51.613685  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.629299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:51.711739  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:51.711740  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:51.711907  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:51.713154  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:51.716740  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (4.598961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:51.719459  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:51.722588  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:51.725298  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:51.725431  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:51.725445  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:51.725817  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:51.725874  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:51.727860  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.610551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:51.728048  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.716061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:51.814295  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.263313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:51.915172  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.960952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:52.013945  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.961833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:52.115059  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.951885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:52.214287  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.20772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:52.314967  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.62144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:52.414349  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.090507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:52.514309  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.124172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:52.616549  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (4.464626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:52.711924  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:52.711962  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:49:52.712116  110404 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:44995/apis/events.k8s.io/v1beta1/namespaces/permit-plugin5efb5f77-e598-43ad-abd1-d06679ec2f70/events: dial tcp 127.0.0.1:44995: connect: connection refused' (may retry after sleeping)
I0814 13:49:52.712199  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:52.713246  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:52.715112  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.017155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:52.719893  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:52.722823  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:52.726938  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:52.727085  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:52.727098  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:52.727236  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:52.727284  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:52.733840  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (5.809547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:52.733931  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (5.340741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:52.815142  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.055439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:52.918717  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (6.617729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.014311  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.986564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.113724  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.719101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.214552  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.531165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.315384  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.329307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.414185  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.080962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.514325  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.128056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.611181  110404 httplog.go:90] GET /api/v1/namespaces/default: (3.039449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.614654  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.445987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:53.615925  110404 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.332344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.617810  110404 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.540299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.712090  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:53.712123  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:53.712288  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:53.713699  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:53.714252  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.251521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.720096  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:53.722941  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:53.727177  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:53.727424  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:53.727458  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:53.727981  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:53.728059  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:53.730732  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.178267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:53.730756  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.399846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.815115  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.830252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:53.915052  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.648796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:54.015231  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.191925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:54.116133  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (4.135675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:54.215115  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.929647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:54.315193  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.055699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:54.414967  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.93146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:54.514004  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.76841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:54.614143  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.081169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:54.712878  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:54.712969  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:54.712991  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:54.714116  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:54.714909  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.688846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:54.721121  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:54.723379  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:54.727455  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:54.727936  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:54.727954  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:54.728197  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:54.729708  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:54.731863  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.898637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:54.731921  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.119423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:54.814041  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.898025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:54.923019  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (9.540774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:55.013785  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.463283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:55.114344  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.389707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:55.215209  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.099163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:55.315448  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.938085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:55.414571  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.489305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:55.513786  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.675195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:55.614293  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.831313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:55.713067  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:55.713186  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:55.713295  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:55.713896  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.861547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:55.714386  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:55.721317  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:55.723784  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:55.727977  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:55.728156  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:55.728179  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:55.728406  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:55.728487  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:55.732342  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.264154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:55.732364  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.270247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:55.815192  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.061563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:55.914312  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.854584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
E0814 13:49:55.920142  110404 factory.go:599] Error getting pod permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/signalling-pod for retry: Get http://127.0.0.1:42099/api/v1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/pods/signalling-pod: dial tcp 127.0.0.1:42099: connect: connection refused; retrying...
I0814 13:49:56.015153  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.964675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:56.114471  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.256932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:56.215570  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.536981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:56.314883  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.496936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:56.419607  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (7.423016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:56.514178  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.996996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:56.614482  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.343171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:56.713943  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.751201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:56.714731  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:56.719700  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:56.719742  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:56.719808  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:56.721469  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:56.727743  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:56.728161  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:56.728285  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:56.728297  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:56.728404  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:56.728465  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:56.731418  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.80576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:56.731790  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.674427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:56.814393  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.203807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:56.914101  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.066128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:57.015309  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.895952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:57.114964  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.803976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:57.214210  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.92884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:57.314080  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.725918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:57.415920  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.435324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:57.516700  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.346475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:57.614100  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.00073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:57.714966  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:57.719053  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (6.909805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:57.719880  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:57.719905  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:57.719923  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:57.721805  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:57.727967  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:57.728329  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:57.728729  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:57.728758  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:57.728899  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:57.728952  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:57.731694  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.440907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:57.732059  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.733754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:57.836955  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.71123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:57.915165  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.74607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:58.014659  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.480904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:58.114394  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.312233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:58.214727  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.585535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:58.314483  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.311097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:58.416755  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.818149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:58.519778  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (7.274662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:58.614648  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.367043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:58.714579  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.358259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:58.715178  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:58.720142  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:58.720142  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:58.720245  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:58.722117  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:58.728174  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:58.728881  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:58.729016  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:58.729030  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:58.729190  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:58.729247  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:58.731966  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.155558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:58.732869  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.221555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:58.813999  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.945229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:58.914774  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.567716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:59.015975  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.894373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:59.113865  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.69855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:59.214045  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.762136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:59.314948  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.699562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:59.414447  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.363441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:59.515040  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.824446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:59.614180  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.135053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:59.713987  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.946927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:59.715430  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:59.720406  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:59.720464  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:59.720476  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:59.722388  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:59.728414  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:59.729060  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:49:59.729211  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:59.729258  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:49:59.729381  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:49:59.729424  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:49:59.731596  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.592366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:49:59.732419  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.518139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:59.813579  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.539035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:49:59.914445  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.27139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.014626  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.471532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.114176  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.059935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.214579  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.001938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.313813  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.691124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.321295  110404 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.120588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.323093  110404 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.386534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.325280  110404 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.404736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.414053  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.936998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.514790  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.550343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.614695  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.401544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.715171  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.900774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.715905  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:00.720940  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:00.721042  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:00.721057  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:00.723437  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:00.729296  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:00.729456  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:00.729479  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:00.729633  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:50:00.729676  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:50:00.732546  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.056828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:00.732635  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:00.733879  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.719317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:00.815072  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.638523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
E0814 13:50:00.902091  110404 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:42099/apis/events.k8s.io/v1beta1/namespaces/permit-plugin9011a568-ad50-4dc3-8bb7-6a223db3123d/events: dial tcp 127.0.0.1:42099: connect: connection refused' (may retry after sleeping)
I0814 13:50:00.915370  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.103942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:01.016715  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.710317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:01.115221  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.108172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:01.214474  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.414054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:01.314407  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.145113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:01.415114  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.846561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:01.514546  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.410096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:01.616431  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (4.184221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:01.716103  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:01.717557  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (5.500238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:01.721172  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:01.721274  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:01.721289  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:01.723610  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:01.729870  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:01.730029  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:01.730042  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:01.730172  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:50:01.730215  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:50:01.732387  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.875341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:01.732387  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.493971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:01.732922  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:01.814467  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.322014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:01.913757  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.766173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:02.014616  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.505557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:02.115094  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.899012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:02.216409  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (4.37467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:02.315030  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.759704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:02.414401  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.982966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:02.513908  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.810726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:02.615305  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.218589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:02.707216  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:02.707272  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:02.707445  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:50:02.707486  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:50:02.711235  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.839896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:02.711899  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.496947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:02.714290  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.000728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:02.716292  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:02.721450  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:02.721751  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:02.721910  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:02.723982  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:02.730144  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:02.730312  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:02.730327  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:02.730602  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:50:02.730662  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:50:02.733171  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.159976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:02.733215  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.264164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:02.733328  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 13:50:02.793388  110404 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:44995/apis/events.k8s.io/v1beta1/namespaces/permit-plugin5efb5f77-e598-43ad-abd1-d06679ec2f70/events: dial tcp 127.0.0.1:44995: connect: connection refused' (may retry after sleeping)
I0814 13:50:02.814028  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.872933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:02.915135  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.107845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.014580  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.217683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.116715  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (4.418253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.213924  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.904923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.314178  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.569433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.414183  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.084376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.514976  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.937124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.610793  110404 httplog.go:90] GET /api/v1/namespaces/default: (2.867942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.614212  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.919314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:03.615258  110404 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (3.888877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.617105  110404 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.478802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.714464  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.192682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.716385  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:03.721619  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:03.721906  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:03.722131  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:03.724214  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:03.730365  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:03.730747  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:03.730776  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:03.730955  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:50:03.731003  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:50:03.733465  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.953464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:03.733874  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:03.734433  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.866034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.814413  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.388584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:03.913941  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.921105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:04.014281  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.20804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:04.115103  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.018893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:04.215286  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (3.086142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:04.314162  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.96162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:04.413880  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.881223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:04.515977  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (4.028013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:04.613915  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.943559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:04.714312  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.222752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:04.716951  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:04.721921  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:04.722215  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:04.722235  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:04.722303  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:04.722311  110404 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:04.722504  110404 factory.go:550] Unable to schedule preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 13:50:04.722539  110404 factory.go:624] Updating pod condition for preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 13:50:04.724381  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:04.725747  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.288452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:04.725816  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.841986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:04.730584  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:04.733995  110404 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 13:50:04.814091  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (2.056989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:04.914089  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.971482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:04.916421  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.52757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:04.918679  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/waiting-pod: (1.830349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:04.925053  110404 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/waiting-pod: (5.992214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:04.929830  110404 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:04.929870  110404 scheduler.go:473] Skip schedule deleting pod: preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/preemptor-pod
I0814 13:50:04.932552  110404 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/events: (2.40244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33792]
I0814 13:50:04.934230  110404 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (8.4816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:04.936867  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/waiting-pod: (925.182µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:04.939654  110404 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugincd1902d3-67bd-4c3d-9abd-351b7eff9600/pods/preemptor-pod: (1.306936ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
E0814 13:50:04.940383  110404 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0814 13:50:04.940813  110404 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=29404&timeout=7m39s&timeoutSeconds=459&watch=true: (1m1.238871859s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57954]
I0814 13:50:04.940829  110404 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=29370&timeout=7m37s&timeoutSeconds=457&watch=true: (1m1.242517158s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57948]
I0814 13:50:04.940848  110404 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=29404&timeout=8m44s&timeoutSeconds=524&watch=true: (1m1.239024232s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57952]
I0814 13:50:04.940857  110404 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=29399&timeout=8m38s&timeoutSeconds=518&watch=true: (1m1.239835802s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57960]
I0814 13:50:04.940979  110404 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=29370&timeout=5m50s&timeoutSeconds=350&watch=true: (1m1.239420274s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57950]
I0814 13:50:04.940994  110404 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=29720&timeout=5m52s&timeoutSeconds=352&watch=true: (1m1.242415273s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0814 13:50:04.941010  110404 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=29373&timeout=9m8s&timeoutSeconds=548&watch=true: (1m1.23723872s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57962]
I0814 13:50:04.941031  110404 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=29403&timeout=5m24s&timeoutSeconds=324&watch=true: (1m1.242800702s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0814 13:50:04.941068  110404 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=29393&timeout=6m29s&timeoutSeconds=389&watch=true: (1m1.235831053s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57964]
I0814 13:50:04.941105  110404 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=29371&timeout=7m46s&timeoutSeconds=466&watch=true: (1m1.236688303s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57956]
I0814 13:50:04.941122  110404 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=29371&timeout=5m2s&timeoutSeconds=302&watch=true: (1m1.235533887s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57966]
I0814 13:50:04.946754  110404 httplog.go:90] DELETE /api/v1/nodes: (5.610991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:04.946985  110404 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 13:50:04.948894  110404 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.106259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
I0814 13:50:04.950982  110404 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.78213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33798]
--- FAIL: TestPreemptWithPermitPlugin (64.86s)
    framework_test.go:1618: Expected the preemptor pod to be scheduled. error: timed out waiting for the condition
    framework_test.go:1622: Expected the waiting pod to get preempted and deleted

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-134128.xml
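
The two assertion messages above come from framework_test.go: the harness timed out waiting for preemptor-pod to be bound to a node, and waiting-pod was never preempted and deleted. "timed out waiting for the condition" is the standard error text returned by the wait helpers in k8s.io/apimachinery, so the test is polling the pod until a condition holds. A minimal sketch of such a poll, assuming a current client-go (the function name, intervals, and the context-taking Get signature are assumptions; this is not the framework's actual helper):

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls until the named pod has been bound to a node.
// If that never happens within the timeout, wait.Poll returns
// wait.ErrWaitTimeout, whose message is "timed out waiting for the
// condition" -- the error reported by the failing test above.
func waitForPodScheduled(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Spec.NodeName != "", nil
	})
}

func main() {}
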

Error lines from build-log.txt

... skipping 703 lines ...
W0814 13:34:57.169] I0814 13:34:57.097428   53048 core.go:185] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0814 13:34:57.169] W0814 13:34:57.097441   53048 controllermanager.go:527] Skipping "route"
W0814 13:34:57.169] I0814 13:34:57.098278   53048 controllermanager.go:535] Started "serviceaccount"
W0814 13:34:57.170] W0814 13:34:57.098389   53048 controllermanager.go:514] "bootstrapsigner" is disabled
W0814 13:34:57.170] I0814 13:34:57.098361   53048 serviceaccounts_controller.go:117] Starting service account controller
W0814 13:34:57.170] I0814 13:34:57.098818   53048 controller_utils.go:1029] Waiting for caches to sync for service account controller
W0814 13:34:57.171] E0814 13:34:57.099984   53048 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0814 13:34:57.171] W0814 13:34:57.100094   53048 controllermanager.go:527] Skipping "service"
W0814 13:34:57.172] I0814 13:34:57.100708   53048 node_lifecycle_controller.go:77] Sending events to api server
W0814 13:34:57.172] E0814 13:34:57.101178   53048 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0814 13:34:57.172] W0814 13:34:57.101437   53048 controllermanager.go:527] Skipping "cloud-node-lifecycle"
W0814 13:34:57.173] I0814 13:34:57.102246   53048 controllermanager.go:535] Started "clusterrole-aggregation"
W0814 13:34:57.173] W0814 13:34:57.102485   53048 controllermanager.go:527] Skipping "root-ca-cert-publisher"
W0814 13:34:57.173] I0814 13:34:57.102309   53048 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0814 13:34:57.173] I0814 13:34:57.103080   53048 controller_utils.go:1029] Waiting for caches to sync for ClusterRoleAggregator controller
W0814 13:34:57.174] I0814 13:34:57.103663   53048 controllermanager.go:535] Started "podgc"
... skipping 96 lines ...
W0814 13:34:57.612] I0814 13:34:57.611840   53048 controller_utils.go:1029] Waiting for caches to sync for HPA controller
W0814 13:34:57.612] I0814 13:34:57.611070   53048 controllermanager.go:535] Started "csrcleaner"
W0814 13:34:57.613] I0814 13:34:57.611083   53048 cleaner.go:81] Starting CSR cleaner controller
W0814 13:34:57.613] I0814 13:34:57.613164   53048 controllermanager.go:535] Started "pv-protection"
W0814 13:34:57.614] I0814 13:34:57.613961   53048 pv_protection_controller.go:82] Starting PV protection controller
W0814 13:34:57.625] I0814 13:34:57.625082   53048 controller_utils.go:1029] Waiting for caches to sync for PV protection controller
W0814 13:34:57.626] W0814 13:34:57.626005   53048 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0814 13:34:57.669] I0814 13:34:57.669272   53048 controller_utils.go:1036] Caches are synced for deployment controller
W0814 13:34:57.671] I0814 13:34:57.670984   53048 controller_utils.go:1036] Caches are synced for taint controller
W0814 13:34:57.671] I0814 13:34:57.671085   53048 taint_manager.go:186] Starting NoExecuteTaintManager
W0814 13:34:57.672] I0814 13:34:57.671199   53048 node_lifecycle_controller.go:1189] Initializing eviction metric for zone: 
W0814 13:34:57.672] I0814 13:34:57.671476   53048 node_lifecycle_controller.go:1039] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0814 13:34:57.673] I0814 13:34:57.672334   53048 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"7f98e2d1-7bb2-4af0-84a5-fa582ea8572d", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
... skipping 4 lines ...
W0814 13:34:57.700] I0814 13:34:57.699597   53048 controller_utils.go:1036] Caches are synced for service account controller
W0814 13:34:57.703] I0814 13:34:57.702810   49595 controller.go:606] quota admission added evaluator for: serviceaccounts
W0814 13:34:57.704] I0814 13:34:57.703405   53048 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0814 13:34:57.705] I0814 13:34:57.704281   53048 controller_utils.go:1036] Caches are synced for GC controller
W0814 13:34:57.707] I0814 13:34:57.707162   53048 controller_utils.go:1036] Caches are synced for job controller
W0814 13:34:57.713] I0814 13:34:57.712444   53048 controller_utils.go:1036] Caches are synced for HPA controller
W0814 13:34:57.722] E0814 13:34:57.721786   53048 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0814 13:34:57.727] I0814 13:34:57.726269   53048 controller_utils.go:1036] Caches are synced for PV protection controller
W0814 13:34:57.737] E0814 13:34:57.736088   53048 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0814 13:34:57.765] I0814 13:34:57.764665   53048 controller_utils.go:1036] Caches are synced for daemon sets controller
W0814 13:34:57.876] I0814 13:34:57.875358   53048 controller_utils.go:1036] Caches are synced for stateful set controller
W0814 13:34:57.878] I0814 13:34:57.877042   53048 controller_utils.go:1036] Caches are synced for expand controller
W0814 13:34:57.879] I0814 13:34:57.877124   53048 controller_utils.go:1036] Caches are synced for PVC protection controller
W0814 13:34:57.881] I0814 13:34:57.880617   53048 controller_utils.go:1036] Caches are synced for attach detach controller
W0814 13:34:57.895] I0814 13:34:57.893882   53048 controller_utils.go:1036] Caches are synced for persistent volume controller
... skipping 89 lines ...
I0814 13:35:02.779] +++ working dir: /go/src/k8s.io/kubernetes
I0814 13:35:02.783] +++ command: run_RESTMapper_evaluation_tests
I0814 13:35:02.803] +++ [0814 13:35:02] Creating namespace namespace-1565789702-18053
I0814 13:35:02.918] namespace/namespace-1565789702-18053 created
I0814 13:35:03.023] Context "test" modified.
I0814 13:35:03.034] +++ [0814 13:35:03] Testing RESTMapper
I0814 13:35:03.174] +++ [0814 13:35:03] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0814 13:35:03.198] +++ exit code: 0
I0814 13:35:03.385] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0814 13:35:03.385] bindings                                                                      true         Binding
I0814 13:35:03.385] componentstatuses                 cs                                          false        ComponentStatus
I0814 13:35:03.386] configmaps                        cm                                          true         ConfigMap
I0814 13:35:03.386] endpoints                         ep                                          true         Endpoints
... skipping 643 lines ...
I0814 13:35:29.114] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 13:35:29.354] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 13:35:29.485] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 13:35:29.731] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 13:35:29.874] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 13:35:30.002] pod "valid-pod" force deleted
W0814 13:35:30.103] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0814 13:35:30.103] error: setting 'all' parameter but found a non empty selector. 
W0814 13:35:30.104] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
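For context, a minimal sketch of the delete invocations that would produce the three messages above (the exact arguments are assumptions, not visible in this excerpt):

  kubectl delete pods                                    # rejected: no name, label selector, or --all flag
  kubectl delete pods --all -l name=valid-pod            # rejected: --all combined with a non-empty selector
  kubectl delete pod valid-pod --force --grace-period=0  # forced immediate deletion; prints the warning above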
I0814 13:35:30.204] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:35:30.285] core.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0814 13:35:30.403] namespace/test-kubectl-describe-pod created
I0814 13:35:30.555] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0814 13:35:30.695] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0814 13:35:32.058] poddisruptionbudget.policy/test-pdb-3 created
I0814 13:35:32.205] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0814 13:35:32.315] poddisruptionbudget.policy/test-pdb-4 created
I0814 13:35:32.453] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0814 13:35:32.688] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:35:32.921] pod/env-test-pod created
W0814 13:35:33.022] error: min-available and max-unavailable cannot be both specified
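The poddisruptionbudget steps above come from kubectl create poddisruptionbudget; schematically (selector and names illustrative), --max-unavailable takes an absolute count or a percentage, and combining it with --min-available is rejected with the error shown above:

  kubectl create poddisruptionbudget test-pdb-3 --selector=app=rails --max-unavailable=2
  kubectl create poddisruptionbudget test-pdb-4 --selector=app=rails --max-unavailable=50%
  kubectl create poddisruptionbudget bad-pdb --selector=app=rails --min-available=1 --max-unavailable=1   # error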
I0814 13:35:33.213] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0814 13:35:33.215] Name:         env-test-pod
I0814 13:35:33.215] Namespace:    test-kubectl-describe-pod
I0814 13:35:33.215] Priority:     0
I0814 13:35:33.215] Node:         <none>
I0814 13:35:33.216] Labels:       <none>
... skipping 173 lines ...
I0814 13:35:49.816] pod/valid-pod patched
I0814 13:35:49.958] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0814 13:35:50.068] pod/valid-pod patched
I0814 13:35:50.191] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0814 13:35:50.428] pod/valid-pod patched
I0814 13:35:50.576] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 13:35:50.825] +++ [0814 13:35:50] "kubectl patch with resourceVersion 505" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
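The conflict asserted above is optimistic-concurrency control: a patch that carries a stale metadata.resourceVersion is rejected with a Conflict. A sketch (the payload here is illustrative, not the test's exact patch):

  kubectl patch pod valid-pod --type=merge \
    -p '{"metadata":{"resourceVersion":"505","labels":{"touched":"yes"}}}'
  # => Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified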
I0814 13:35:51.148] pod "valid-pod" deleted
I0814 13:35:51.166] pod/valid-pod replaced
I0814 13:35:51.307] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0814 13:35:51.518] Successful
I0814 13:35:51.519] message:error: --grace-period must have --force specified
I0814 13:35:51.520] has:\-\-grace-period must have \-\-force specified
I0814 13:35:51.737] Successful
I0814 13:35:51.738] message:error: --timeout must have --force specified
I0814 13:35:51.738] has:\-\-timeout must have \-\-force specified
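Both messages above come from kubectl replace: --grace-period and --timeout only apply to the delete-and-recreate path, so each requires --force. Roughly (manifest path assumed):

  kubectl replace --grace-period=1 -f valid-pod.yaml          # rejected: needs --force
  kubectl replace --timeout=1m -f valid-pod.yaml              # rejected: needs --force
  kubectl replace --force --grace-period=0 -f valid-pod.yaml  # delete the object, then recreate it from the file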
I0814 13:35:51.942] node/node-v1-test created
W0814 13:35:52.044] W0814 13:35:51.942631   53048 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0814 13:35:52.157] node/node-v1-test replaced
I0814 13:35:52.295] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0814 13:35:52.411] node "node-v1-test" deleted
I0814 13:35:52.557] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 13:35:52.968] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0814 13:35:54.335] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 25 lines ...
I0814 13:35:54.630]     name: kubernetes-pause
I0814 13:35:54.630] has:localonlyvalue
I0814 13:35:54.683] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0814 13:35:54.919] core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0814 13:35:55.042] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0814 13:35:55.146] pod/valid-pod labeled
W0814 13:35:55.247] error: 'name' already has a value (valid-pod), and --overwrite is false
I0814 13:35:55.348] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
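The labelling step above shows that overwriting an existing label key is refused unless --overwrite is set; schematically:

  kubectl label pod valid-pod name=valid-pod-super-sayan              # rejected: 'name' already has a value
  kubectl label pod valid-pod name=valid-pod-super-sayan --overwrite  # accepted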
I0814 13:35:55.425] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 13:35:55.529] pod "valid-pod" force deleted
W0814 13:35:55.630] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 13:35:55.731] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:35:55.732] +++ [0814 13:35:55] Creating namespace namespace-1565789755-2505
... skipping 82 lines ...
I0814 13:36:05.504] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0814 13:36:05.508] +++ working dir: /go/src/k8s.io/kubernetes
I0814 13:36:05.511] +++ command: run_kubectl_create_error_tests
I0814 13:36:05.535] +++ [0814 13:36:05] Creating namespace namespace-1565789765-8237
I0814 13:36:05.634] namespace/namespace-1565789765-8237 created
I0814 13:36:05.737] Context "test" modified.
I0814 13:36:05.748] +++ [0814 13:36:05] Testing kubectl create with error
W0814 13:36:05.849] Error: must specify one of -f and -k
W0814 13:36:05.849] 
W0814 13:36:05.850] Create a resource from a file or from stdin.
W0814 13:36:05.850] 
W0814 13:36:05.850]  JSON and YAML formats are accepted.
W0814 13:36:05.850] 
W0814 13:36:05.850] Examples:
... skipping 41 lines ...
W0814 13:36:05.855] 
W0814 13:36:05.855] Usage:
W0814 13:36:05.855]   kubectl create -f FILENAME [options]
W0814 13:36:05.855] 
W0814 13:36:05.855] Use "kubectl <command> --help" for more information about a given command.
W0814 13:36:05.856] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0814 13:36:06.072] +++ [0814 13:36:06] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0814 13:36:06.173] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 13:36:06.174] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 13:36:06.322] +++ exit code: 0
I0814 13:36:06.376] Recording: run_kubectl_apply_tests
I0814 13:36:06.377] Running command: run_kubectl_apply_tests
I0814 13:36:06.412] 
... skipping 20 lines ...
W0814 13:36:09.288] I0814 13:36:09.287250   49595 client.go:354] scheme "" not registered, fallback to default scheme
W0814 13:36:09.288] I0814 13:36:09.287284   49595 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0814 13:36:09.288] I0814 13:36:09.287338   49595 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 13:36:09.289] I0814 13:36:09.288154   49595 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 13:36:09.293] I0814 13:36:09.292950   49595 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0814 13:36:09.394] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0814 13:36:09.495] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0814 13:36:09.596] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 13:36:09.597] +++ exit code: 0
I0814 13:36:09.636] Recording: run_kubectl_run_tests
I0814 13:36:09.636] Running command: run_kubectl_run_tests
I0814 13:36:09.672] 
I0814 13:36:09.676] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 96 lines ...
I0814 13:36:13.133] Context "test" modified.
I0814 13:36:13.142] +++ [0814 13:36:13] Testing kubectl create filter
I0814 13:36:13.262] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:36:13.492] pod/selector-test-pod created
I0814 13:36:13.643] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0814 13:36:13.777] Successful
I0814 13:36:13.777] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0814 13:36:13.777] has:pods "selector-test-pod-dont-apply" not found
I0814 13:36:13.906] pod "selector-test-pod" deleted
I0814 13:36:13.941] +++ exit code: 0
I0814 13:36:14.002] Recording: run_kubectl_apply_deployments_tests
I0814 13:36:14.002] Running command: run_kubectl_apply_deployments_tests
I0814 13:36:14.043] 
... skipping 29 lines ...
W0814 13:36:17.042] I0814 13:36:16.948006   53048 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565789774-3160", Name:"nginx", UID:"d7702909-4ea8-4a3d-a5ef-33fb7cac956c", APIVersion:"apps/v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7dbc4d9f to 3
W0814 13:36:17.042] I0814 13:36:16.953389   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789774-3160", Name:"nginx-7dbc4d9f", UID:"c36709ca-55ad-4aad-8442-3d8c3820c96e", APIVersion:"apps/v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-crmfv
W0814 13:36:17.042] I0814 13:36:16.957947   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789774-3160", Name:"nginx-7dbc4d9f", UID:"c36709ca-55ad-4aad-8442-3d8c3820c96e", APIVersion:"apps/v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-8zfcb
W0814 13:36:17.043] I0814 13:36:16.958829   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789774-3160", Name:"nginx-7dbc4d9f", UID:"c36709ca-55ad-4aad-8442-3d8c3820c96e", APIVersion:"apps/v1", ResourceVersion:"592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-jlnqf
I0814 13:36:17.144] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0814 13:36:21.392] Successful
I0814 13:36:21.392] message:Error from server (Conflict): error when applying patch:
I0814 13:36:21.393] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565789774-3160\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0814 13:36:21.393] to:
I0814 13:36:21.393] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0814 13:36:21.394] Name: "nginx", Namespace: "namespace-1565789774-3160"
I0814 13:36:21.396] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565789774-3160\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-14T13:36:16Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-14T13:36:16Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-14T13:36:16Z"]] "name":"nginx" "namespace":"namespace-1565789774-3160" "resourceVersion":"604" "selfLink":"/apis/apps/v1/namespaces/namespace-1565789774-3160/deployments/nginx" "uid":"d7702909-4ea8-4a3d-a5ef-33fb7cac956c"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] 
"status":map["conditions":[map["lastTransitionTime":"2019-08-14T13:36:16Z" "lastUpdateTime":"2019-08-14T13:36:16Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-14T13:36:16Z" "lastUpdateTime":"2019-08-14T13:36:16Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0814 13:36:21.397] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0814 13:36:21.397] has:Error from server (Conflict)
W0814 13:36:21.497] I0814 13:36:19.602303   53048 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1565789761-17551
I0814 13:36:26.744] deployment.apps/nginx configured
W0814 13:36:26.845] I0814 13:36:26.750755   53048 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565789774-3160", Name:"nginx", UID:"ab1d261e-7c52-470e-a30e-4d37d7da1de3", APIVersion:"apps/v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0814 13:36:26.846] I0814 13:36:26.759025   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789774-3160", Name:"nginx-594f77b9f6", UID:"26a833d6-99da-4646-8588-37a3cfb03a4e", APIVersion:"apps/v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-c85vc
W0814 13:36:26.846] I0814 13:36:26.768271   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789774-3160", Name:"nginx-594f77b9f6", UID:"26a833d6-99da-4646-8588-37a3cfb03a4e", APIVersion:"apps/v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-hmjhg
W0814 13:36:26.847] I0814 13:36:26.768714   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789774-3160", Name:"nginx-594f77b9f6", UID:"26a833d6-99da-4646-8588-37a3cfb03a4e", APIVersion:"apps/v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-tvm4x
I0814 13:36:26.948] Successful
I0814 13:36:26.948] message:        "name": "nginx2"
I0814 13:36:26.949]           "name": "nginx2"
I0814 13:36:26.949] has:"name": "nginx2"
W0814 13:36:31.388] E0814 13:36:31.387677   53048 replica_set.go:450] Sync "namespace-1565789774-3160/nginx-594f77b9f6" failed with replicasets.apps "nginx-594f77b9f6" not found
W0814 13:36:32.212] I0814 13:36:32.211980   53048 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565789774-3160", Name:"nginx", UID:"d6bb0664-cd35-41d7-a3ef-f15c18eb5c8f", APIVersion:"apps/v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0814 13:36:32.219] I0814 13:36:32.218584   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789774-3160", Name:"nginx-594f77b9f6", UID:"ab2a8cf5-3a39-4ad0-a144-2cffd06bdbfb", APIVersion:"apps/v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-nhpwf
W0814 13:36:32.224] I0814 13:36:32.223264   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789774-3160", Name:"nginx-594f77b9f6", UID:"ab2a8cf5-3a39-4ad0-a144-2cffd06bdbfb", APIVersion:"apps/v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-vvvmz
W0814 13:36:32.226] I0814 13:36:32.225899   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789774-3160", Name:"nginx-594f77b9f6", UID:"ab2a8cf5-3a39-4ad0-a144-2cffd06bdbfb", APIVersion:"apps/v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-mjm6t
I0814 13:36:32.327] Successful
I0814 13:36:32.328] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 159 lines ...
I0814 13:36:34.872] +++ [0814 13:36:34] Creating namespace namespace-1565789794-6492
I0814 13:36:34.970] namespace/namespace-1565789794-6492 created
I0814 13:36:35.059] Context "test" modified.
I0814 13:36:35.070] +++ [0814 13:36:35] Testing kubectl get
I0814 13:36:35.198] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:36:35.319] Successful
I0814 13:36:35.320] message:Error from server (NotFound): pods "abc" not found
I0814 13:36:35.320] has:pods "abc" not found
I0814 13:36:35.444] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:36:35.582] Successful
I0814 13:36:35.582] message:Error from server (NotFound): pods "abc" not found
I0814 13:36:35.582] has:pods "abc" not found
I0814 13:36:35.710] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:36:35.823] Successful
I0814 13:36:35.824] message:{
I0814 13:36:35.824]     "apiVersion": "v1",
I0814 13:36:35.824]     "items": [],
... skipping 23 lines ...
I0814 13:36:36.366] has not:No resources found
I0814 13:36:36.478] Successful
I0814 13:36:36.478] message:NAME
I0814 13:36:36.479] has not:No resources found
I0814 13:36:36.612] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:36:36.760] Successful
I0814 13:36:36.761] message:error: the server doesn't have a resource type "foobar"
I0814 13:36:36.761] has not:No resources found
I0814 13:36:36.882] Successful
I0814 13:36:36.883] message:No resources found in namespace-1565789794-6492 namespace.
I0814 13:36:36.883] has:No resources found
I0814 13:36:37.008] Successful
I0814 13:36:37.008] message:
I0814 13:36:37.009] has not:No resources found
I0814 13:36:37.134] Successful
I0814 13:36:37.134] message:No resources found in namespace-1565789794-6492 namespace.
I0814 13:36:37.135] has:No resources found
I0814 13:36:37.263] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:36:37.376] Successful
I0814 13:36:37.377] message:Error from server (NotFound): pods "abc" not found
I0814 13:36:37.377] has:pods "abc" not found
I0814 13:36:37.380] FAIL!
I0814 13:36:37.380] message:Error from server (NotFound): pods "abc" not found
I0814 13:36:37.380] has not:List
I0814 13:36:37.380] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0814 13:36:37.537] Successful
I0814 13:36:37.538] message:I0814 13:36:37.463032   63573 loader.go:375] Config loaded from file:  /tmp/tmp.YrmCHHfNuy/.kube/config
I0814 13:36:37.538] I0814 13:36:37.465256   63573 round_trippers.go:471] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0814 13:36:37.538] I0814 13:36:37.491591   63573 round_trippers.go:471] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0814 13:36:43.481] Successful
I0814 13:36:43.482] message:NAME    DATA   AGE
I0814 13:36:43.482] one     0      0s
I0814 13:36:43.483] three   0      0s
I0814 13:36:43.483] two     0      0s
I0814 13:36:43.483] STATUS    REASON          MESSAGE
I0814 13:36:43.483] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 13:36:43.484] has not:watch is only supported on individual resources
I0814 13:36:44.606] Successful
I0814 13:36:44.607] message:STATUS    REASON          MESSAGE
I0814 13:36:44.607] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 13:36:44.607] has not:watch is only supported on individual resources
I0814 13:36:44.612] +++ [0814 13:36:44] Creating namespace namespace-1565789804-220
I0814 13:36:44.717] namespace/namespace-1565789804-220 created
I0814 13:36:44.816] Context "test" modified.
I0814 13:36:44.957] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:36:45.169] pod/valid-pod created
... skipping 104 lines ...
I0814 13:36:45.322] }
I0814 13:36:45.433] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 13:36:45.783] <no value>Successful
I0814 13:36:45.785] message:valid-pod:
I0814 13:36:45.786] has:valid-pod:
I0814 13:36:45.902] Successful
I0814 13:36:45.902] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0814 13:36:45.902] 	template was:
I0814 13:36:45.903] 		{.missing}
I0814 13:36:45.903] 	object given to jsonpath engine was:
I0814 13:36:45.905] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-14T13:36:45Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-14T13:36:45Z"}}, "name":"valid-pod", "namespace":"namespace-1565789804-220", "resourceVersion":"704", "selfLink":"/api/v1/namespaces/namespace-1565789804-220/pods/valid-pod", "uid":"034c4a9c-e996-44b0-849f-6b67f062e1d9"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0814 13:36:45.905] has:missing is not found
W0814 13:36:46.006] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0814 13:36:46.107] Successful
I0814 13:36:46.107] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0814 13:36:46.107] 	template was:
I0814 13:36:46.108] 		{{.missing}}
I0814 13:36:46.108] 	raw data was:
I0814 13:36:46.109] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-14T13:36:45Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-14T13:36:45Z"}],"name":"valid-pod","namespace":"namespace-1565789804-220","resourceVersion":"704","selfLink":"/api/v1/namespaces/namespace-1565789804-220/pods/valid-pod","uid":"034c4a9c-e996-44b0-849f-6b67f062e1d9"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0814 13:36:46.110] 	object given to template engine was:
I0814 13:36:46.111] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-14T13:36:45Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-14T13:36:45Z]] name:valid-pod namespace:namespace-1565789804-220 resourceVersion:704 selfLink:/api/v1/namespaces/namespace-1565789804-220/pods/valid-pod uid:034c4a9c-e996-44b0-849f-6b67f062e1d9] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0814 13:36:46.112] has:map has no entry for key "missing"
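Both template failures above come from asking an output printer for a key the object lacks; the invocations look roughly like:

  kubectl get pod valid-pod -o jsonpath='{.missing}'        # jsonpath error; the object is dumped for debugging
  kubectl get pod valid-pod -o go-template='{{.missing}}'   # go-template: map has no entry for key "missing"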
I0814 13:36:47.143] Successful
I0814 13:36:47.144] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 13:36:47.145] valid-pod   0/1     Pending   0          1s
I0814 13:36:47.145] STATUS      REASON          MESSAGE
I0814 13:36:47.146] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 13:36:47.146] has:STATUS
I0814 13:36:47.146] Successful
I0814 13:36:47.147] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 13:36:47.147] valid-pod   0/1     Pending   0          1s
I0814 13:36:47.148] STATUS      REASON          MESSAGE
I0814 13:36:47.148] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 13:36:47.148] has:valid-pod
I0814 13:36:48.268] Successful
I0814 13:36:48.269] message:pod/valid-pod
I0814 13:36:48.269] has not:STATUS
I0814 13:36:48.272] Successful
I0814 13:36:48.272] message:pod/valid-pod
... skipping 144 lines ...
I0814 13:36:49.402] status:
I0814 13:36:49.402]   phase: Pending
I0814 13:36:49.402]   qosClass: Guaranteed
I0814 13:36:49.402] ---
I0814 13:36:49.402] has:name: valid-pod
I0814 13:36:49.500] Successful
I0814 13:36:49.501] message:Error from server (NotFound): pods "invalid-pod" not found
I0814 13:36:49.501] has:"invalid-pod" not found
I0814 13:36:49.607] pod "valid-pod" deleted
I0814 13:36:49.735] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:36:49.940] pod/redis-master created
I0814 13:36:49.946] pod/valid-pod created
I0814 13:36:50.075] Successful
... skipping 35 lines ...
I0814 13:36:51.661] +++ command: run_kubectl_exec_pod_tests
I0814 13:36:51.680] +++ [0814 13:36:51] Creating namespace namespace-1565789811-15379
I0814 13:36:51.786] namespace/namespace-1565789811-15379 created
I0814 13:36:51.899] Context "test" modified.
I0814 13:36:51.910] +++ [0814 13:36:51] Testing kubectl exec POD COMMAND
I0814 13:36:52.036] Successful
I0814 13:36:52.036] message:Error from server (NotFound): pods "abc" not found
I0814 13:36:52.036] has:pods "abc" not found
I0814 13:36:52.251] pod/test-pod created
I0814 13:36:52.396] Successful
I0814 13:36:52.396] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 13:36:52.396] has not:pods "test-pod" not found
I0814 13:36:52.400] Successful
I0814 13:36:52.400] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 13:36:52.401] has not:pod or type/name must be specified
I0814 13:36:52.509] pod "test-pod" deleted
I0814 13:36:52.537] +++ exit code: 0
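The exec checks above rely on the target pod never being scheduled, so the API answers BadRequest ("does not have a host assigned") rather than NotFound; the calls are roughly:

  kubectl exec abc -- date        # Error from server (NotFound): pods "abc" not found
  kubectl exec test-pod -- date   # Error from server (BadRequest): pod test-pod does not have a host assigned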
I0814 13:36:52.586] Recording: run_kubectl_exec_resource_name_tests
I0814 13:36:52.586] Running command: run_kubectl_exec_resource_name_tests
I0814 13:36:52.623] 
... skipping 2 lines ...
I0814 13:36:52.633] +++ command: run_kubectl_exec_resource_name_tests
I0814 13:36:52.651] +++ [0814 13:36:52] Creating namespace namespace-1565789812-27266
I0814 13:36:52.762] namespace/namespace-1565789812-27266 created
I0814 13:36:52.874] Context "test" modified.
I0814 13:36:52.885] +++ [0814 13:36:52] Testing kubectl exec TYPE/NAME COMMAND
I0814 13:36:53.025] Successful
I0814 13:36:53.025] message:error: the server doesn't have a resource type "foo"
I0814 13:36:53.026] has:error:
I0814 13:36:53.147] Successful
I0814 13:36:53.148] message:Error from server (NotFound): deployments.apps "bar" not found
I0814 13:36:53.148] has:"bar" not found
I0814 13:36:53.369] pod/test-pod created
I0814 13:36:53.600] replicaset.apps/frontend created
W0814 13:36:53.701] I0814 13:36:53.606864   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789812-27266", Name:"frontend", UID:"6fda2ad4-b1cd-45ab-9d03-5c4a77dd62df", APIVersion:"apps/v1", ResourceVersion:"758", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4qbqj
W0814 13:36:53.702] I0814 13:36:53.612090   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789812-27266", Name:"frontend", UID:"6fda2ad4-b1cd-45ab-9d03-5c4a77dd62df", APIVersion:"apps/v1", ResourceVersion:"758", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4q2qq
W0814 13:36:53.702] I0814 13:36:53.612822   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789812-27266", Name:"frontend", UID:"6fda2ad4-b1cd-45ab-9d03-5c4a77dd62df", APIVersion:"apps/v1", ResourceVersion:"758", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tksnq
I0814 13:36:53.822] configmap/test-set-env-config created
I0814 13:36:53.952] Successful
I0814 13:36:53.952] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0814 13:36:53.952] has:not implemented
I0814 13:36:54.082] Successful
I0814 13:36:54.083] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 13:36:54.083] has not:not found
I0814 13:36:54.086] Successful
I0814 13:36:54.086] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 13:36:54.086] has not:pod or type/name must be specified
I0814 13:36:54.224] Successful
I0814 13:36:54.225] message:Error from server (BadRequest): pod frontend-4q2qq does not have a host assigned
I0814 13:36:54.225] has not:not found
I0814 13:36:54.230] Successful
I0814 13:36:54.230] message:Error from server (BadRequest): pod frontend-4q2qq does not have a host assigned
I0814 13:36:54.230] has not:pod or type/name must be specified
I0814 13:36:54.346] pod "test-pod" deleted
I0814 13:36:54.472] replicaset.apps "frontend" deleted
I0814 13:36:54.585] configmap "test-set-env-config" deleted
I0814 13:36:54.618] +++ exit code: 0
I0814 13:36:54.677] Recording: run_create_secret_tests
I0814 13:36:54.678] Running command: run_create_secret_tests
I0814 13:36:54.715] 
I0814 13:36:54.719] +++ Running case: test-cmd.run_create_secret_tests 
I0814 13:36:54.723] +++ working dir: /go/src/k8s.io/kubernetes
I0814 13:36:54.727] +++ command: run_create_secret_tests
I0814 13:36:54.857] Successful
I0814 13:36:54.858] message:Error from server (NotFound): secrets "mysecret" not found
I0814 13:36:54.858] has:secrets "mysecret" not found
I0814 13:36:55.096] Successful
I0814 13:36:55.096] message:Error from server (NotFound): secrets "mysecret" not found
I0814 13:36:55.096] has:secrets "mysecret" not found
I0814 13:36:55.099] Successful
I0814 13:36:55.100] message:user-specified
I0814 13:36:55.100] has:user-specified
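The secret checks above create a generic secret from a literal and read the value back; schematically (key and value illustrative):

  kubectl create secret generic mysecret --from-literal=username=user-specified
  kubectl get secret mysecret -o jsonpath='{.data.username}' | base64 --decode   # prints user-specified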
I0814 13:36:55.199] Successful
I0814 13:36:55.309] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"bf017fb3-8c85-4eb8-93c2-ff3896c9b845","resourceVersion":"779","creationTimestamp":"2019-08-14T13:36:55Z"}}
... skipping 2 lines ...
I0814 13:36:55.548] has:uid
I0814 13:36:55.664] Successful
I0814 13:36:55.665] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"bf017fb3-8c85-4eb8-93c2-ff3896c9b845","resourceVersion":"781","creationTimestamp":"2019-08-14T13:36:55Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-14T13:36:55Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0814 13:36:55.666] has:config1
I0814 13:36:55.774] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"bf017fb3-8c85-4eb8-93c2-ff3896c9b845"}}
I0814 13:36:55.912] Successful
I0814 13:36:55.912] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0814 13:36:55.912] has:configmaps "tester-update-cm" not found
I0814 13:36:55.932] +++ exit code: 0
I0814 13:36:55.993] Recording: run_kubectl_create_kustomization_directory_tests
I0814 13:36:55.993] Running command: run_kubectl_create_kustomization_directory_tests
I0814 13:36:56.030] 
I0814 13:36:56.034] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
I0814 13:36:59.986] valid-pod   0/1     Pending   0          0s
I0814 13:36:59.986] has:valid-pod
I0814 13:37:01.114] Successful
I0814 13:37:01.115] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 13:37:01.115] valid-pod   0/1     Pending   0          1s
I0814 13:37:01.115] STATUS      REASON          MESSAGE
I0814 13:37:01.115] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 13:37:01.115] has:Timeout exceeded while reading body
I0814 13:37:01.240] Successful
I0814 13:37:01.241] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 13:37:01.241] valid-pod   0/1     Pending   0          2s
I0814 13:37:01.241] has:valid-pod
I0814 13:37:01.347] Successful
I0814 13:37:01.348] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0814 13:37:01.348] has:Invalid timeout value
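The accepted timeout forms are an integer (seconds) or an integer with a unit. Assuming the check is driven through kubectl's global --request-timeout flag (the exact flag is not visible in this excerpt):

  kubectl get pod valid-pod --request-timeout=1s   # accepted
  kubectl get pod valid-pod --request-timeout=1p   # rejected: Invalid timeout value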
I0814 13:37:01.466] pod "valid-pod" deleted
I0814 13:37:01.501] +++ exit code: 0
I0814 13:37:01.567] Recording: run_crd_tests
I0814 13:37:01.568] Running command: run_crd_tests
I0814 13:37:01.609] 
... skipping 245 lines ...
I0814 13:37:07.534] foo.company.com/test patched
I0814 13:37:07.650] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0814 13:37:07.749] foo.company.com/test patched
I0814 13:37:07.864] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0814 13:37:07.969] foo.company.com/test patched
I0814 13:37:08.085] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0814 13:37:08.267] +++ [0814 13:37:08] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0814 13:37:08.348] {
I0814 13:37:08.348]     "apiVersion": "company.com/v1",
I0814 13:37:08.349]     "kind": "Foo",
I0814 13:37:08.349]     "metadata": {
I0814 13:37:08.349]         "annotations": {
I0814 13:37:08.349]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 353 lines ...
I0814 13:37:38.903] namespace/non-native-resources created
I0814 13:37:39.139] bar.company.com/test created
I0814 13:37:39.277] crd.sh:455: Successful get bars {{len .items}}: 1
I0814 13:37:39.387] (Bnamespace "non-native-resources" deleted
I0814 13:37:44.761] crd.sh:458: Successful get bars {{len .items}}: 0
I0814 13:37:45.013] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0814 13:37:45.114] Error from server (NotFound): namespaces "non-native-resources" not found
I0814 13:37:45.215] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0814 13:37:45.334] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 13:37:45.518] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0814 13:37:45.563] +++ exit code: 0
I0814 13:37:45.624] Recording: run_cmd_with_img_tests
I0814 13:37:45.625] Running command: run_cmd_with_img_tests
... skipping 4 lines ...
I0814 13:37:45.696] +++ [0814 13:37:45] Creating namespace namespace-1565789865-23484
I0814 13:37:45.800] namespace/namespace-1565789865-23484 created
I0814 13:37:45.932] Context "test" modified.
I0814 13:37:45.943] +++ [0814 13:37:45] Testing cmd with image
W0814 13:37:46.045] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 13:37:46.045] W0814 13:37:46.041974   49595 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 13:37:46.046] E0814 13:37:46.044031   53048 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:46.065] I0814 13:37:46.064909   53048 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565789865-23484", Name:"test1", UID:"cbfd4e1d-6239-49fb-aa86-4869a70b85f5", APIVersion:"apps/v1", ResourceVersion:"942", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-9797f89d8 to 1
W0814 13:37:46.075] I0814 13:37:46.074234   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789865-23484", Name:"test1-9797f89d8", UID:"5f994d87-88a4-4576-8630-e6e804632865", APIVersion:"apps/v1", ResourceVersion:"943", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-nlh4z
I0814 13:37:46.177] Successful
I0814 13:37:46.178] message:deployment.apps/test1 created
I0814 13:37:46.178] has:deployment.apps/test1 created
I0814 13:37:46.191] deployment.apps "test1" deleted
W0814 13:37:46.292] W0814 13:37:46.185876   49595 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 13:37:46.293] E0814 13:37:46.188373   53048 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:46.354] W0814 13:37:46.353947   49595 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 13:37:46.356] E0814 13:37:46.356308   53048 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:46.457] Successful
I0814 13:37:46.458] message:error: Invalid image name "InvalidImageName": invalid reference format
I0814 13:37:46.458] has:error: Invalid image name "InvalidImageName": invalid reference format
I0814 13:37:46.458] +++ exit code: 0
I0814 13:37:46.459] +++ [0814 13:37:46] Testing recursive resources
I0814 13:37:46.459] +++ [0814 13:37:46] Creating namespace namespace-1565789866-24665
I0814 13:37:46.510] namespace/namespace-1565789866-24665 created
W0814 13:37:46.611] W0814 13:37:46.531692   49595 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 13:37:46.612] E0814 13:37:46.534078   53048 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:46.713] Context "test" modified.
I0814 13:37:46.749] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:37:47.174] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:47.179] Successful
I0814 13:37:47.180] message:pod/busybox0 created
I0814 13:37:47.181] pod/busybox1 created
I0814 13:37:47.181] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 13:37:47.181] has:error validating data: kind not set
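The recursive checks walk a test-data directory that deliberately contains one broken manifest; valid files are still created and the broken one is reported. Roughly (-R/--recursive as in the test scripts; --validate=false would suppress the schema error instead):

  kubectl create -f hack/testdata/recursive/pod --recursive
  # => pod/busybox0 created, pod/busybox1 created,
  #    plus the "kind not set" validation error for the broken file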
W0814 13:37:47.282] E0814 13:37:47.046572   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:47.283] E0814 13:37:47.191058   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:47.359] E0814 13:37:47.358714   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:47.460] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:47.615] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0814 13:37:47.618] Successful
I0814 13:37:47.618] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 13:37:47.619] has:Object 'Kind' is missing
W0814 13:37:47.720] E0814 13:37:47.536434   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:47.821] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:48.184] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 13:37:48.188] Successful
I0814 13:37:48.189] message:pod/busybox0 replaced
I0814 13:37:48.189] pod/busybox1 replaced
I0814 13:37:48.189] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 13:37:48.189] has:error validating data: kind not set
W0814 13:37:48.290] E0814 13:37:48.048258   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:48.291] E0814 13:37:48.193304   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:48.362] E0814 13:37:48.361397   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:48.463] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:48.476] Successful
I0814 13:37:48.476] message:Name:         busybox0
I0814 13:37:48.476] Namespace:    namespace-1565789866-24665
I0814 13:37:48.476] Priority:     0
I0814 13:37:48.477] Node:         <none>
... skipping 154 lines ...
I0814 13:37:48.494] QoS Class:        BestEffort
I0814 13:37:48.495] Node-Selectors:   <none>
I0814 13:37:48.495] Tolerations:      <none>
I0814 13:37:48.495] Events:           <none>
I0814 13:37:48.495] unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 13:37:48.495] has:Object 'Kind' is missing
W0814 13:37:48.596] E0814 13:37:48.538701   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:48.697] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:48.884] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0814 13:37:48.887] Successful
I0814 13:37:48.887] message:pod/busybox0 annotated
I0814 13:37:48.888] pod/busybox1 annotated
I0814 13:37:48.888] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 13:37:48.888] has:Object 'Kind' is missing
I0814 13:37:49.027] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:49.428] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 13:37:49.432] Successful
I0814 13:37:49.432] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 13:37:49.433] pod/busybox0 configured
I0814 13:37:49.433] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 13:37:49.433] pod/busybox1 configured
I0814 13:37:49.434] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 13:37:49.434] has:error validating data: kind not set
W0814 13:37:49.535] E0814 13:37:49.050481   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:49.535] E0814 13:37:49.195703   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:49.535] E0814 13:37:49.364436   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:49.543] E0814 13:37:49.541790   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:49.643] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:37:49.788] deployment.apps/nginx created
W0814 13:37:49.889] I0814 13:37:49.794114   53048 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565789866-24665", Name:"nginx", UID:"9ce8f84e-f137-49ae-8fa0-ac964e3d5bee", APIVersion:"apps/v1", ResourceVersion:"968", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0814 13:37:49.890] I0814 13:37:49.799424   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789866-24665", Name:"nginx-bbbbb95b5", UID:"d0c3a9dc-63c4-430e-8c0d-d5f2666318f3", APIVersion:"apps/v1", ResourceVersion:"969", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-4mnm4
W0814 13:37:49.891] I0814 13:37:49.804481   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789866-24665", Name:"nginx-bbbbb95b5", UID:"d0c3a9dc-63c4-430e-8c0d-d5f2666318f3", APIVersion:"apps/v1", ResourceVersion:"969", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-8g9ct
W0814 13:37:49.891] I0814 13:37:49.804990   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789866-24665", Name:"nginx-bbbbb95b5", UID:"d0c3a9dc-63c4-430e-8c0d-d5f2666318f3", APIVersion:"apps/v1", ResourceVersion:"969", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-fw544
... skipping 40 lines ...
I0814 13:37:50.347]       restartPolicy: Always
I0814 13:37:50.347]       schedulerName: default-scheduler
I0814 13:37:50.347]       securityContext: {}
I0814 13:37:50.347]       terminationGracePeriodSeconds: 30
I0814 13:37:50.347] status: {}
I0814 13:37:50.347] has:extensions/v1beta1
W0814 13:37:50.448] E0814 13:37:50.052326   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:50.449] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 13:37:50.449] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0814 13:37:50.450] E0814 13:37:50.198375   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:50.450] E0814 13:37:50.366052   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:50.545] E0814 13:37:50.544758   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:50.646] deployment.apps "nginx" deleted
I0814 13:37:50.647] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:50.854] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:50.857] Successful
I0814 13:37:50.857] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0814 13:37:50.858] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 13:37:50.858] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 13:37:50.858] has:Object 'Kind' is missing
I0814 13:37:50.991] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:51.117] Successful
I0814 13:37:51.118] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 13:37:51.118] has:busybox0:busybox1:
I0814 13:37:51.121] Successful
I0814 13:37:51.122] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 13:37:51.122] has:Object 'Kind' is missing
W0814 13:37:51.222] E0814 13:37:51.054247   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:51.224] E0814 13:37:51.201138   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:51.325] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:51.384] pod/busybox0 labeled
I0814 13:37:51.385] pod/busybox1 labeled
I0814 13:37:51.385] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0814 13:37:51.486] E0814 13:37:51.368832   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:51.547] E0814 13:37:51.546740   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:51.648] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0814 13:37:51.649] Successful
I0814 13:37:51.649] message:pod/busybox0 labeled
I0814 13:37:51.649] pod/busybox1 labeled
I0814 13:37:51.649] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 13:37:51.650] has:Object 'Kind' is missing
I0814 13:37:51.678] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:51.817] pod/busybox0 patched
I0814 13:37:51.818] pod/busybox1 patched
I0814 13:37:51.818] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 13:37:51.957] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0814 13:37:51.960] Successful
I0814 13:37:51.961] message:pod/busybox0 patched
I0814 13:37:51.961] pod/busybox1 patched
I0814 13:37:51.961] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 13:37:51.961] has:Object 'Kind' is missing
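The patch step above swaps the busybox container image to prom/busybox across every decodable manifest in the directory while still surfacing the error for the broken file. A hedged sketch of an equivalent invocation, assuming the same hypothetical directory and that kubectl patch accepts the recursive filename flags used by apply in these tests:

# Illustrative only: strategic-merge patch applied recursively by file.
kubectl patch -f /tmp/recursive-demo/pod --recursive \
  -p '{"spec":{"containers":[{"name":"busybox","image":"prom/busybox"}]}}'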
W0814 13:37:52.062] E0814 13:37:52.056925   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:52.163] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:52.396] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:37:52.400] Successful
I0814 13:37:52.401] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 13:37:52.401] pod "busybox0" force deleted
I0814 13:37:52.401] pod "busybox1" force deleted
I0814 13:37:52.402] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 13:37:52.402] has:Object 'Kind' is missing
W0814 13:37:52.503] E0814 13:37:52.203584   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:52.503] E0814 13:37:52.370572   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:52.550] E0814 13:37:52.549135   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:52.651] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:37:52.767] replicationcontroller/busybox0 created
I0814 13:37:52.775] replicationcontroller/busybox1 created
W0814 13:37:52.876] I0814 13:37:52.773697   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565789866-24665", Name:"busybox0", UID:"e92d91db-f514-49b8-b384-1ff848dc3556", APIVersion:"v1", ResourceVersion:"999", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-b6shf
W0814 13:37:52.876] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 13:37:52.877] I0814 13:37:52.780138   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565789866-24665", Name:"busybox1", UID:"fdba2704-a538-4a1f-bb41-e4009eab46d7", APIVersion:"v1", ResourceVersion:"1000", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-9f96b
I0814 13:37:52.977] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:53.049] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:53.179] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 13:37:53.321] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 13:37:53.573] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 13:37:53.699] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 13:37:53.701] Successful
I0814 13:37:53.702] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0814 13:37:53.702] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0814 13:37:53.702] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 13:37:53.702] has:Object 'Kind' is missing
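The HPA assertions above (min 1, max 2, target CPU 80%) correspond to autoscaling both replication controllers in one recursive call; roughly, under the same hypothetical directory layout and cluster assumptions:

# Illustrative only: creates an HPA for each decodable RC in the tree.
kubectl autoscale -f /tmp/recursive-demo/rc --recursive --min=1 --max=2 --cpu-percent=80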
W0814 13:37:53.803] E0814 13:37:53.059849   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:53.803] E0814 13:37:53.205375   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:53.803] E0814 13:37:53.373744   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:53.804] E0814 13:37:53.551154   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:53.905] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0814 13:37:53.926] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0814 13:37:54.060] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:54.187] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 13:37:54.317] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 13:37:54.594] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 13:37:54.726] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 13:37:54.728] Successful
I0814 13:37:54.729] message:service/busybox0 exposed
I0814 13:37:54.729] service/busybox1 exposed
I0814 13:37:54.730] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 13:37:54.730] has:Object 'Kind' is missing
W0814 13:37:54.830] E0814 13:37:54.062699   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:54.831] E0814 13:37:54.207459   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:54.832] E0814 13:37:54.375980   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:54.832] E0814 13:37:54.552914   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:54.933] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:54.991] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 13:37:55.126] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 13:37:55.421] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0814 13:37:55.566] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0814 13:37:55.570] Successful
I0814 13:37:55.571] message:replicationcontroller/busybox0 scaled
I0814 13:37:55.571] replicationcontroller/busybox1 scaled
I0814 13:37:55.572] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 13:37:55.573] has:Object 'Kind' is missing
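Scaling both controllers from 1 to 2 replicas, as asserted above, can be expressed as one recursive scale call; a sketch under the same assumptions:

# Illustrative only: sets .spec.replicas on every decodable RC in the tree.
kubectl scale -f /tmp/recursive-demo/rc --recursive --replicas=2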
W0814 13:37:55.675] E0814 13:37:55.065071   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:55.676] E0814 13:37:55.209830   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:55.676] I0814 13:37:55.257428   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565789866-24665", Name:"busybox0", UID:"e92d91db-f514-49b8-b384-1ff848dc3556", APIVersion:"v1", ResourceVersion:"1021", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-vfm9l
W0814 13:37:55.677] I0814 13:37:55.278946   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565789866-24665", Name:"busybox1", UID:"fdba2704-a538-4a1f-bb41-e4009eab46d7", APIVersion:"v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-7cr49
W0814 13:37:55.677] E0814 13:37:55.379132   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:55.678] E0814 13:37:55.555070   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:55.778] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:55.962] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:37:55.967] Successful
I0814 13:37:55.968] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 13:37:55.968] replicationcontroller "busybox0" force deleted
I0814 13:37:55.968] replicationcontroller "busybox1" force deleted
I0814 13:37:55.969] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 13:37:55.969] has:Object 'Kind' is missing
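The force deletion above skips the graceful-termination wait, which is why the harness prints the "Immediate deletion does not wait..." warning; a sketch of the equivalent command, again with the hypothetical directory:

# Illustrative only: immediate deletion, no confirmation that pods have terminated.
kubectl delete -f /tmp/recursive-demo/rc --recursive --force --grace-period=0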
W0814 13:37:56.070] E0814 13:37:56.066550   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:56.171] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:37:56.330] deployment.apps/nginx1-deployment created
I0814 13:37:56.337] deployment.apps/nginx0-deployment created
W0814 13:37:56.438] E0814 13:37:56.211844   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:56.438] I0814 13:37:56.336719   53048 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565789866-24665", Name:"nginx1-deployment", UID:"9f71391d-8f5b-4faa-81fc-9fee4ae5a1d5", APIVersion:"apps/v1", ResourceVersion:"1042", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0814 13:37:56.439] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 13:37:56.439] I0814 13:37:56.342151   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789866-24665", Name:"nginx1-deployment-84f7f49fb7", UID:"84aaf352-38c7-497f-940d-098060998fff", APIVersion:"apps/v1", ResourceVersion:"1043", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-lzqgg
W0814 13:37:56.440] I0814 13:37:56.343031   53048 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565789866-24665", Name:"nginx0-deployment", UID:"46676ce9-cc8a-47db-a399-bf2dd5f0a16e", APIVersion:"apps/v1", ResourceVersion:"1044", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0814 13:37:56.440] I0814 13:37:56.346315   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789866-24665", Name:"nginx1-deployment-84f7f49fb7", UID:"84aaf352-38c7-497f-940d-098060998fff", APIVersion:"apps/v1", ResourceVersion:"1043", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-r97fr
W0814 13:37:56.440] I0814 13:37:56.350874   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789866-24665", Name:"nginx0-deployment-57475bf54d", UID:"fdd0b196-4d7b-4b1b-95e0-4d463d95fc57", APIVersion:"apps/v1", ResourceVersion:"1046", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-dbtp2
W0814 13:37:56.441] I0814 13:37:56.356951   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565789866-24665", Name:"nginx0-deployment-57475bf54d", UID:"fdd0b196-4d7b-4b1b-95e0-4d463d95fc57", APIVersion:"apps/v1", ResourceVersion:"1046", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-fwvqj
W0814 13:37:56.441] E0814 13:37:56.382688   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:56.542] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0814 13:37:56.638] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 13:37:56.956] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 13:37:56.959] Successful
I0814 13:37:56.959] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0814 13:37:56.960] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0814 13:37:56.960] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 13:37:56.960] has:Object 'Kind' is missing
W0814 13:37:57.061] E0814 13:37:56.557955   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:57.069] E0814 13:37:57.068411   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:57.170] deployment.apps/nginx1-deployment paused
I0814 13:37:57.171] deployment.apps/nginx0-deployment paused
I0814 13:37:57.251] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0814 13:37:57.254] Successful
I0814 13:37:57.255] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 13:37:57.256] has:Object 'Kind' is missing
W0814 13:37:57.356] E0814 13:37:57.214489   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:57.385] E0814 13:37:57.384943   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:57.487] deployment.apps/nginx1-deployment resumed
I0814 13:37:57.487] deployment.apps/nginx0-deployment resumed
I0814 13:37:57.569] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0814 13:37:57.573] Successful
I0814 13:37:57.574] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 13:37:57.574] has:Object 'Kind' is missing
W0814 13:37:57.675] E0814 13:37:57.560418   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:57.776] Successful
I0814 13:37:57.777] message:deployment.apps/nginx1-deployment 
I0814 13:37:57.777] REVISION  CHANGE-CAUSE
I0814 13:37:57.777] 1         <none>
I0814 13:37:57.778] 
I0814 13:37:57.778] deployment.apps/nginx0-deployment 
I0814 13:37:57.778] REVISION  CHANGE-CAUSE
I0814 13:37:57.778] 1         <none>
I0814 13:37:57.778] 
I0814 13:37:57.779] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 13:37:57.780] has:nginx0-deployment
I0814 13:37:57.780] Successful
I0814 13:37:57.780] message:deployment.apps/nginx1-deployment 
I0814 13:37:57.781] REVISION  CHANGE-CAUSE
I0814 13:37:57.781] 1         <none>
I0814 13:37:57.781] 
I0814 13:37:57.781] deployment.apps/nginx0-deployment 
I0814 13:37:57.782] REVISION  CHANGE-CAUSE
I0814 13:37:57.782] 1         <none>
I0814 13:37:57.782] 
I0814 13:37:57.783] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 13:37:57.783] has:nginx1-deployment
I0814 13:37:57.783] Successful
I0814 13:37:57.783] message:deployment.apps/nginx1-deployment 
I0814 13:37:57.784] REVISION  CHANGE-CAUSE
I0814 13:37:57.784] 1         <none>
I0814 13:37:57.784] 
I0814 13:37:57.784] deployment.apps/nginx0-deployment 
I0814 13:37:57.784] REVISION  CHANGE-CAUSE
I0814 13:37:57.785] 1         <none>
I0814 13:37:57.785] 
I0814 13:37:57.785] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 13:37:57.786] has:Object 'Kind' is missing
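The three assertions above read the rollout history of both deployments in a single recursive call and then match different substrings of the combined output; roughly, assuming the hypothetical directory and that the rollout subcommands accept the same filename flags used elsewhere in these tests:

# Illustrative only: prints a REVISION/CHANGE-CAUSE table per decodable deployment.
kubectl rollout history -f /tmp/recursive-demo/deployment --recursive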
I0814 13:37:57.870] deployment.apps "nginx1-deployment" force deleted
I0814 13:37:57.878] deployment.apps "nginx0-deployment" force deleted
W0814 13:37:57.980] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 13:37:57.981] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0814 13:37:58.074] E0814 13:37:58.073064   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:58.219] E0814 13:37:58.217914   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:58.390] E0814 13:37:58.388988   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:58.563] E0814 13:37:58.562832   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:59.036] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:37:59.270] replicationcontroller/busybox0 created
I0814 13:37:59.277] replicationcontroller/busybox1 created
W0814 13:37:59.378] E0814 13:37:59.075682   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:59.379] E0814 13:37:59.219957   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:37:59.380] I0814 13:37:59.275432   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565789866-24665", Name:"busybox0", UID:"73175f7c-bd20-4b8d-9775-609f03497605", APIVersion:"v1", ResourceVersion:"1091", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-9zg6b
W0814 13:37:59.381] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 13:37:59.381] I0814 13:37:59.283525   53048 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565789866-24665", Name:"busybox1", UID:"010fa316-8be8-4307-9084-acbd94319f6b", APIVersion:"v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-q5brv
W0814 13:37:59.392] E0814 13:37:59.391187   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:59.493] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 13:37:59.603] Successful
I0814 13:37:59.604] message:no rollbacker has been implemented for "ReplicationController"
I0814 13:37:59.605] no rollbacker has been implemented for "ReplicationController"
I0814 13:37:59.605] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 13:37:59.605] has:no rollbacker has been implemented for "ReplicationController"
I0814 13:37:59.606] Successful
I0814 13:37:59.606] message:no rollbacker has been implemented for "ReplicationController"
I0814 13:37:59.607] no rollbacker has been implemented for "ReplicationController"
I0814 13:37:59.607] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 13:37:59.607] has:Object 'Kind' is missing
W0814 13:37:59.708] E0814 13:37:59.565799   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:37:59.809] Successful
I0814 13:37:59.810] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 13:37:59.810] error: replicationcontrollers "busybox0" pausing is not supported
I0814 13:37:59.811] error: replicationcontrollers "busybox1" pausing is not supported
I0814 13:37:59.811] has:Object 'Kind' is missing
I0814 13:37:59.811] Successful
I0814 13:37:59.812] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 13:37:59.812] error: replicationcontrollers "busybox0" pausing is not supported
I0814 13:37:59.813] error: replicationcontrollers "busybox1" pausing is not supported
I0814 13:37:59.813] has:replicationcontrollers "busybox0" pausing is not supported
I0814 13:37:59.813] Successful
I0814 13:37:59.814] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 13:37:59.814] error: replicationcontrollers "busybox0" pausing is not supported
I0814 13:37:59.814] error: replicationcontrollers "busybox1" pausing is not supported
I0814 13:37:59.814] has:replicationcontrollers "busybox1" pausing is not supported
I0814 13:37:59.904] Successful
I0814 13:37:59.905] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 13:37:59.906] error: replicationcontrollers "busybox0" resuming is not supported
I0814 13:37:59.906] error: replicationcontrollers "busybox1" resuming is not supported
I0814 13:37:59.906] has:Object 'Kind' is missing
I0814 13:37:59.909] Successful
I0814 13:37:59.910] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 13:37:59.910] error: replicationcontrollers "busybox0" resuming is not supported
I0814 13:37:59.910] error: replicationcontrollers "busybox1" resuming is not supported
I0814 13:37:59.910] has:replicationcontrollers "busybox0" resuming is not supported
I0814 13:37:59.914] Successful
I0814 13:37:59.915] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 13:37:59.915] error: replicationcontrollers "busybox0" resuming is not supported
I0814 13:37:59.915] error: replicationcontrollers "busybox1" resuming is not supported
I0814 13:37:59.915] has:replicationcontrollers "busybox0" resuming is not supported
W0814 13:38:00.036] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 13:38:00.062] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0814 13:38:00.078] E0814 13:38:00.077917   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:00.133] I0814 13:38:00.131872   53048 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0814 13:38:00.224] E0814 13:38:00.223320   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:00.237] I0814 13:38:00.235063   53048 controller_utils.go:1036] Caches are synced for resource quota controller
I0814 13:38:00.338] replicationcontroller "busybox0" force deleted
I0814 13:38:00.339] replicationcontroller "busybox1" force deleted
W0814 13:38:00.439] E0814 13:38:00.394034   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:00.571] E0814 13:38:00.569759   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:00.579] I0814 13:38:00.577828   53048 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0814 13:38:00.682] I0814 13:38:00.681084   53048 controller_utils.go:1036] Caches are synced for garbage collector controller
I0814 13:38:01.074] Recording: run_namespace_tests
I0814 13:38:01.075] Running command: run_namespace_tests
I0814 13:38:01.115] 
I0814 13:38:01.118] +++ Running case: test-cmd.run_namespace_tests 
I0814 13:38:01.124] +++ working dir: /go/src/k8s.io/kubernetes
I0814 13:38:01.128] +++ command: run_namespace_tests
I0814 13:38:01.145] +++ [0814 13:38:01] Testing kubectl(v1:namespaces)
W0814 13:38:01.246] E0814 13:38:01.081547   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:01.247] E0814 13:38:01.225244   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:01.348] namespace/my-namespace created
I0814 13:38:01.437] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 13:38:01.545] namespace "my-namespace" deleted
W0814 13:38:01.646] E0814 13:38:01.398709   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:01.647] E0814 13:38:01.572237   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:02.086] E0814 13:38:02.085417   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:02.228] E0814 13:38:02.227522   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:02.403] E0814 13:38:02.401907   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:02.575] E0814 13:38:02.574243   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:03.091] E0814 13:38:03.090043   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:03.231] E0814 13:38:03.230563   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:03.406] E0814 13:38:03.405609   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:03.580] E0814 13:38:03.577876   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:04.092] E0814 13:38:04.091719   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:04.234] E0814 13:38:04.233866   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:04.409] E0814 13:38:04.408516   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:04.599] E0814 13:38:04.580036   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:05.096] E0814 13:38:05.095383   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:05.238] E0814 13:38:05.237516   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:05.412] E0814 13:38:05.410878   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:05.584] E0814 13:38:05.583106   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:06.100] E0814 13:38:06.099679   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:06.240] E0814 13:38:06.239398   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:06.413] E0814 13:38:06.412967   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:06.585] E0814 13:38:06.584183   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:06.719] namespace/my-namespace condition met
I0814 13:38:06.858] Successful
I0814 13:38:06.859] message:Error from server (NotFound): namespaces "my-namespace" not found
I0814 13:38:06.859] has: not found
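The "condition met" line followed by the NotFound lookup indicates the test deletes the namespace and then blocks until it is actually gone; a sketch of that pattern (the timeout value is an assumption):

# Illustrative only: delete, wait for removal, then confirm the NotFound error.
kubectl delete namespace my-namespace
kubectl wait --for=delete namespace/my-namespace --timeout=60s
kubectl get namespace my-namespace   # expected: Error from server (NotFound)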
I0814 13:38:06.969] namespace/my-namespace created
W0814 13:38:07.102] E0814 13:38:07.101569   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:07.203] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 13:38:07.502] Successful
I0814 13:38:07.503] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 13:38:07.504] namespace "kube-node-lease" deleted
I0814 13:38:07.504] namespace "my-namespace" deleted
I0814 13:38:07.504] namespace "namespace-1565789699-19450" deleted
... skipping 27 lines ...
I0814 13:38:07.507] namespace "namespace-1565789817-19630" deleted
I0814 13:38:07.507] namespace "namespace-1565789819-5031" deleted
I0814 13:38:07.507] namespace "namespace-1565789821-8543" deleted
I0814 13:38:07.507] namespace "namespace-1565789823-7558" deleted
I0814 13:38:07.508] namespace "namespace-1565789865-23484" deleted
I0814 13:38:07.508] namespace "namespace-1565789866-24665" deleted
I0814 13:38:07.508] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 13:38:07.508] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 13:38:07.508] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 13:38:07.508] has:warning: deleting cluster-scoped resources
I0814 13:38:07.508] Successful
I0814 13:38:07.509] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 13:38:07.509] namespace "kube-node-lease" deleted
I0814 13:38:07.509] namespace "my-namespace" deleted
I0814 13:38:07.509] namespace "namespace-1565789699-19450" deleted
... skipping 27 lines ...
I0814 13:38:07.512] namespace "namespace-1565789817-19630" deleted
I0814 13:38:07.512] namespace "namespace-1565789819-5031" deleted
I0814 13:38:07.512] namespace "namespace-1565789821-8543" deleted
I0814 13:38:07.512] namespace "namespace-1565789823-7558" deleted
I0814 13:38:07.512] namespace "namespace-1565789865-23484" deleted
I0814 13:38:07.512] namespace "namespace-1565789866-24665" deleted
I0814 13:38:07.513] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 13:38:07.513] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 13:38:07.513] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 13:38:07.513] has:namespace "my-namespace" deleted
W0814 13:38:07.613] E0814 13:38:07.241797   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:07.614] E0814 13:38:07.414849   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:07.615] E0814 13:38:07.588176   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:07.716] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0814 13:38:07.777] namespace/other created
I0814 13:38:07.918] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0814 13:38:08.061] (Bcore.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:38:08.298] pod/valid-pod created
W0814 13:38:08.399] E0814 13:38:08.104838   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:08.400] E0814 13:38:08.244913   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:08.417] E0814 13:38:08.416768   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:08.425] I0814 13:38:08.424344   53048 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1565789866-24665
W0814 13:38:08.432] I0814 13:38:08.431539   53048 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1565789866-24665
I0814 13:38:08.533] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 13:38:08.606] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 13:38:08.742] Successful
I0814 13:38:08.743] message:error: a resource cannot be retrieved by name across all namespaces
I0814 13:38:08.744] has:a resource cannot be retrieved by name across all namespaces
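That error is kubectl refusing to combine a specific resource name with --all-namespaces, since a name is only unique within a single namespace; a sketch of the failing call, using the pod name from the log:

# Illustrative only: fails with "a resource cannot be retrieved by name across all namespaces".
kubectl get pods valid-pod --all-namespaces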
W0814 13:38:08.845] E0814 13:38:08.590564   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:08.946] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 13:38:08.985] pod "valid-pod" force deleted
W0814 13:38:09.086] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 13:38:09.107] E0814 13:38:09.106564   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:09.208] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:38:09.228] namespace "other" deleted
W0814 13:38:09.330] E0814 13:38:09.247732   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:09.422] E0814 13:38:09.420781   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:09.594] E0814 13:38:09.593019   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:10.109] E0814 13:38:10.108702   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:10.250] E0814 13:38:10.249898   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:10.425] E0814 13:38:10.423847   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:10.596] E0814 13:38:10.595380   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:11.112] E0814 13:38:11.111796   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:11.252] E0814 13:38:11.251591   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:11.429] E0814 13:38:11.427444   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:11.598] E0814 13:38:11.597892   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:12.116] E0814 13:38:12.114431   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:12.254] E0814 13:38:12.253317   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:12.429] E0814 13:38:12.428535   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:12.599] E0814 13:38:12.598807   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:13.117] E0814 13:38:13.115993   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:13.255] E0814 13:38:13.254488   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:13.431] E0814 13:38:13.430460   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:13.601] E0814 13:38:13.600317   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:14.120] E0814 13:38:14.119053   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:14.257] E0814 13:38:14.256027   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:14.411] +++ exit code: 0
I0814 13:38:14.470] Recording: run_secrets_test
I0814 13:38:14.470] Running command: run_secrets_test
I0814 13:38:14.496] 
I0814 13:38:14.499] +++ Running case: test-cmd.run_secrets_test 
I0814 13:38:14.502] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 35 lines ...
I0814 13:38:14.804]   key1: dmFsdWUx
I0814 13:38:14.804] kind: Secret
I0814 13:38:14.804] metadata:
I0814 13:38:14.804]   creationTimestamp: null
I0814 13:38:14.804]   name: test
I0814 13:38:14.805] has not:example.com
W0814 13:38:14.905] E0814 13:38:14.432653   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:14.906] E0814 13:38:14.601737   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:14.906] I0814 13:38:14.786278   69986 loader.go:375] Config loaded from file:  /tmp/tmp.YrmCHHfNuy/.kube/config
I0814 13:38:15.007] core.sh:725: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-secrets\" }}found{{end}}{{end}}:: :
I0814 13:38:15.007] namespace/test-secrets created
I0814 13:38:15.121] core.sh:729: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
I0814 13:38:15.243] core.sh:733: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:38:15.332] secret/test-secret created
W0814 13:38:15.433] E0814 13:38:15.120935   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:15.434] E0814 13:38:15.257733   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:15.435] E0814 13:38:15.434953   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:15.536] core.sh:737: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 13:38:15.556] core.sh:738: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
I0814 13:38:15.742] secret "test-secret" deleted
W0814 13:38:15.842] E0814 13:38:15.603435   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:15.943] core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:38:15.947] secret/test-secret created
I0814 13:38:16.056] core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 13:38:16.158] core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
I0814 13:38:16.354] secret "test-secret" deleted
W0814 13:38:16.455] E0814 13:38:16.122179   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:16.455] E0814 13:38:16.259623   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:16.456] E0814 13:38:16.436669   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:16.556] core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:38:16.560] secret/test-secret created
W0814 13:38:16.661] E0814 13:38:16.605392   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:16.762] core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 13:38:16.777] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0814 13:38:16.869] secret "test-secret" deleted
I0814 13:38:16.965] secret/test-secret created
I0814 13:38:17.074] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 13:38:17.177] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0814 13:38:17.263] secret "test-secret" deleted
W0814 13:38:17.363] E0814 13:38:17.124299   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:17.364] E0814 13:38:17.261288   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:17.440] E0814 13:38:17.439195   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:17.541] secret/secret-string-data created
I0814 13:38:17.555] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 13:38:17.670] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 13:38:17.770] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0814 13:38:17.864] secret "secret-string-data" deleted
W0814 13:38:17.965] E0814 13:38:17.607462   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:18.066] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:38:18.166] secret "test-secret" deleted
I0814 13:38:18.260] namespace "test-secrets" deleted
W0814 13:38:18.360] E0814 13:38:18.126284   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:18.361] E0814 13:38:18.262542   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:18.441] E0814 13:38:18.440982   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:18.610] E0814 13:38:18.609457   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:19.129] E0814 13:38:19.128910   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:19.265] E0814 13:38:19.264174   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:19.444] E0814 13:38:19.442761   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:19.613] E0814 13:38:19.611947   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:20.131] E0814 13:38:20.130961   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:20.266] E0814 13:38:20.265510   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:20.446] E0814 13:38:20.445881   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:20.614] E0814 13:38:20.613435   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:21.133] E0814 13:38:21.132248   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:21.268] E0814 13:38:21.267155   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:21.448] E0814 13:38:21.447987   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:21.615] E0814 13:38:21.614923   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:22.135] E0814 13:38:22.134352   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:22.270] E0814 13:38:22.269960   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:22.450] E0814 13:38:22.450074   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:22.617] E0814 13:38:22.617193   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:23.138] E0814 13:38:23.136969   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:23.271] E0814 13:38:23.271135   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:23.411] +++ exit code: 0
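For reference, a minimal sketch (not part of this log) of the kind of kubectl invocations behind the run_secrets_test assertions above. The namespace and secret names match the log; the literal values, registry credentials, and certificate paths are illustrative assumptions.

kubectl create namespace test-secrets
# --from-literal values are base64-encoded into .data (v1 -> djE=, v2 -> djI=, as in core.sh:797 above)
kubectl create secret generic secret-string-data --namespace=test-secrets --from-literal=k1=v1 --from-literal=k2=v2
# a docker-registry secret gets type kubernetes.io/dockerconfigjson, a tls secret gets kubernetes.io/tls
kubectl create secret docker-registry test-secret --namespace=test-secrets --docker-username=user --docker-password=pass --docker-email=user@example.com
kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'
kubectl delete secret test-secret --namespace=test-secrets
kubectl create secret tls test-secret --namespace=test-secrets --cert=tls.crt --key=tls.key
kubectl delete namespace test-secrets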
I0814 13:38:23.455] Recording: run_configmap_tests
I0814 13:38:23.455] Running command: run_configmap_tests
I0814 13:38:23.481] 
I0814 13:38:23.484] +++ Running case: test-cmd.run_configmap_tests 
I0814 13:38:23.487] +++ working dir: /go/src/k8s.io/kubernetes
I0814 13:38:23.489] +++ command: run_configmap_tests
I0814 13:38:23.504] +++ [0814 13:38:23] Creating namespace namespace-1565789903-29060
I0814 13:38:23.592] namespace/namespace-1565789903-29060 created
I0814 13:38:23.680] Context "test" modified.
I0814 13:38:23.690] +++ [0814 13:38:23] Testing configmaps
W0814 13:38:23.791] E0814 13:38:23.451934   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:23.792] E0814 13:38:23.619079   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:23.908] configmap/test-configmap created
I0814 13:38:24.030] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0814 13:38:24.125] configmap "test-configmap" deleted
W0814 13:38:24.225] E0814 13:38:24.138926   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:24.274] E0814 13:38:24.274039   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:24.375] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0814 13:38:24.375] namespace/test-configmaps created
I0814 13:38:24.452] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
I0814 13:38:24.559] core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
I0814 13:38:24.662] core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
I0814 13:38:24.753] configmap/test-configmap created
I0814 13:38:24.853] configmap/test-binary-configmap created
I0814 13:38:24.959] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0814 13:38:25.069] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0814 13:38:25.369] configmap "test-configmap" deleted
W0814 13:38:25.470] E0814 13:38:24.453300   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:25.470] E0814 13:38:24.620312   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:25.470] E0814 13:38:25.141259   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:25.471] E0814 13:38:25.275388   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:25.471] E0814 13:38:25.455432   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:25.572] configmap "test-binary-configmap" deleted
I0814 13:38:25.594] namespace "test-configmaps" deleted
W0814 13:38:25.695] E0814 13:38:25.622261   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:26.146] E0814 13:38:26.144694   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:26.278] E0814 13:38:26.277260   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:26.458] E0814 13:38:26.457046   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:26.626] E0814 13:38:26.624868   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:27.149] E0814 13:38:27.148011   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:27.279] E0814 13:38:27.278793   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:27.460] E0814 13:38:27.459977   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:27.627] E0814 13:38:27.626914   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:28.151] E0814 13:38:28.149946   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:28.283] E0814 13:38:28.281928   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:28.463] E0814 13:38:28.462190   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:28.630] E0814 13:38:28.628890   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:29.154] E0814 13:38:29.153123   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:29.285] E0814 13:38:29.283556   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:29.466] E0814 13:38:29.464616   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:29.631] E0814 13:38:29.630708   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:30.155] E0814 13:38:30.154815   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:30.286] E0814 13:38:30.285852   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:30.467] E0814 13:38:30.466898   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:30.633] E0814 13:38:30.632546   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:30.792] +++ exit code: 0
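A similar sketch for the configmap checks above. The resource and namespace names come from the log; the literal key/value pair and the file fed to --from-file are assumptions.

kubectl create namespace test-configmaps
kubectl create configmap test-configmap --namespace=test-configmaps --from-literal=key1=value1
kubectl create configmap test-binary-configmap --namespace=test-configmaps --from-file=./some-binary-file
kubectl get configmap/test-configmap --namespace=test-configmaps -o go-template='{{.metadata.name}}'
kubectl delete configmap test-configmap test-binary-configmap --namespace=test-configmaps
kubectl delete namespace test-configmaps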
I0814 13:38:30.855] Recording: run_client_config_tests
I0814 13:38:30.855] Running command: run_client_config_tests
I0814 13:38:30.895] 
I0814 13:38:30.898] +++ Running case: test-cmd.run_client_config_tests 
I0814 13:38:30.903] +++ working dir: /go/src/k8s.io/kubernetes
I0814 13:38:30.907] +++ command: run_client_config_tests
I0814 13:38:30.927] +++ [0814 13:38:30] Creating namespace namespace-1565789910-15712
I0814 13:38:31.020] namespace/namespace-1565789910-15712 created
I0814 13:38:31.116] Context "test" modified.
I0814 13:38:31.127] +++ [0814 13:38:31] Testing client config
I0814 13:38:31.222] Successful
I0814 13:38:31.223] message:error: stat missing: no such file or directory
I0814 13:38:31.223] has:missing: no such file or directory
I0814 13:38:31.318] Successful
I0814 13:38:31.319] message:error: stat missing: no such file or directory
I0814 13:38:31.319] has:missing: no such file or directory
W0814 13:38:31.420] E0814 13:38:31.157152   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:31.421] E0814 13:38:31.287937   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:31.470] E0814 13:38:31.468956   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:31.571] Successful
I0814 13:38:31.572] message:error: stat missing: no such file or directory
I0814 13:38:31.572] has:missing: no such file or directory
I0814 13:38:31.573] Successful
I0814 13:38:31.573] message:Error in configuration: context was not found for specified context: missing-context
I0814 13:38:31.573] has:context was not found for specified context: missing-context
I0814 13:38:31.632] Successful
I0814 13:38:31.633] message:error: no server found for cluster "missing-cluster"
I0814 13:38:31.633] has:no server found for cluster "missing-cluster"
W0814 13:38:31.734] E0814 13:38:31.645441   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:31.835] Successful
I0814 13:38:31.836] message:error: auth info "missing-user" does not exist
I0814 13:38:31.836] has:auth info "missing-user" does not exist
I0814 13:38:31.966] Successful
I0814 13:38:31.966] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0814 13:38:31.966] has:error loading config file
I0814 13:38:32.084] Successful
I0814 13:38:32.085] message:error: stat missing-config: no such file or directory
I0814 13:38:32.085] has:no such file or directory
I0814 13:38:32.110] +++ exit code: 0
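The client-config failures above pair with kubectl's global flags roughly as sketched below; "get pods" is only an illustrative verb, and the contents of /tmp/newconfig.yaml (a kubeconfig with an invalid apiVersion "v-1") come from the test, not this sketch.

kubectl get pods --kubeconfig=missing             # error: stat missing: no such file or directory
kubectl get pods --context=missing-context        # context was not found for specified context: missing-context
kubectl get pods --cluster=missing-cluster        # no server found for cluster "missing-cluster"
kubectl get pods --user=missing-user              # auth info "missing-user" does not exist
kubectl get pods --kubeconfig=/tmp/newconfig.yaml # error loading config file (no kind "Config" for version "v-1")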
I0814 13:38:32.177] Recording: run_service_accounts_tests
I0814 13:38:32.177] Running command: run_service_accounts_tests
I0814 13:38:32.213] 
I0814 13:38:32.215] +++ Running case: test-cmd.run_service_accounts_tests 
I0814 13:38:32.221] +++ working dir: /go/src/k8s.io/kubernetes
I0814 13:38:32.224] +++ command: run_service_accounts_tests
I0814 13:38:32.245] +++ [0814 13:38:32] Creating namespace namespace-1565789912-9566
W0814 13:38:32.346] E0814 13:38:32.160000   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:32.346] E0814 13:38:32.290296   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:32.447] namespace/namespace-1565789912-9566 created
I0814 13:38:32.448] Context "test" modified.
I0814 13:38:32.461] +++ [0814 13:38:32] Testing service accounts
W0814 13:38:32.562] E0814 13:38:32.471086   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:32.648] E0814 13:38:32.647440   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:32.749] core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
I0814 13:38:32.749] namespace/test-service-accounts created
I0814 13:38:32.824] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0814 13:38:32.930] serviceaccount/test-service-account created
I0814 13:38:33.063] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0814 13:38:33.180] serviceaccount "test-service-account" deleted
W0814 13:38:33.281] E0814 13:38:33.162049   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:33.292] E0814 13:38:33.291789   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:33.393] namespace "test-service-accounts" deleted
W0814 13:38:33.495] E0814 13:38:33.475705   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:33.651] E0814 13:38:33.650153   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:34.165] E0814 13:38:34.164641   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:34.295] E0814 13:38:34.294188   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:34.480] E0814 13:38:34.479467   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:34.655] E0814 13:38:34.654100   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:35.167] E0814 13:38:35.166932   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:35.297] E0814 13:38:35.296820   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:35.482] E0814 13:38:35.481604   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:35.657] E0814 13:38:35.656124   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:36.170] E0814 13:38:36.169366   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:36.299] E0814 13:38:36.298722   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:36.485] E0814 13:38:36.484515   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:36.660] E0814 13:38:36.659021   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:37.172] E0814 13:38:37.171478   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:37.302] E0814 13:38:37.301077   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:37.488] E0814 13:38:37.486963   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:37.662] E0814 13:38:37.661839   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:38.175] E0814 13:38:38.174813   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:38.303] E0814 13:38:38.302949   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:38.489] E0814 13:38:38.488717   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:38.590] +++ exit code: 0
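A sketch of the service-account flow above; all names appear in the log, and only the output format of the get is assumed.

kubectl create namespace test-service-accounts
kubectl create serviceaccount test-service-account --namespace=test-service-accounts
kubectl get serviceaccount/test-service-account --namespace=test-service-accounts -o go-template='{{.metadata.name}}'
kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
kubectl delete namespace test-service-accounts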
I0814 13:38:38.591] Recording: run_job_tests
I0814 13:38:38.591] Running command: run_job_tests
I0814 13:38:38.598] 
I0814 13:38:38.601] +++ Running case: test-cmd.run_job_tests 
I0814 13:38:38.606] +++ working dir: /go/src/k8s.io/kubernetes
I0814 13:38:38.608] +++ command: run_job_tests
I0814 13:38:38.628] +++ [0814 13:38:38] Creating namespace namespace-1565789918-9067
I0814 13:38:38.724] namespace/namespace-1565789918-9067 created
W0814 13:38:38.825] E0814 13:38:38.663838   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:38.925] Context "test" modified.
I0814 13:38:38.926] +++ [0814 13:38:38] Testing job
I0814 13:38:38.984] batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: :
I0814 13:38:39.085] namespace/test-jobs created
W0814 13:38:39.186] E0814 13:38:39.177448   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:39.287] batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
I0814 13:38:39.312] cronjob.batch/pi created
W0814 13:38:39.413] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 13:38:39.414] E0814 13:38:39.304539   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:39.492] E0814 13:38:39.491621   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:39.593] batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
I0814 13:38:39.594] NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
I0814 13:38:39.594] pi     59 23 31 2 *   False     0        <none>          0s
W0814 13:38:39.695] E0814 13:38:39.665774   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:39.796] Name:                          pi
I0814 13:38:39.797] Namespace:                     test-jobs
I0814 13:38:39.797] Labels:                        run=pi
I0814 13:38:39.797] Annotations:                   <none>
I0814 13:38:39.797] Schedule:                      59 23 31 2 *
I0814 13:38:39.797] Concurrency Policy:            Allow
I0814 13:38:39.798] Suspend:                       False
I0814 13:38:39.798] Successful Job History Limit:  3
I0814 13:38:39.798] Failed Job History Limit:      1
I0814 13:38:39.798] Starting Deadline Seconds:     <unset>
I0814 13:38:39.798] Selector:                      <unset>
I0814 13:38:39.798] Parallelism:                   <unset>
I0814 13:38:39.798] Completions:                   <unset>
I0814 13:38:39.798] Pod Template:
I0814 13:38:39.798]   Labels:  run=pi
... skipping 19 lines ...
I0814 13:38:39.833] Successful
I0814 13:38:39.834] message:job.batch/test-job
I0814 13:38:39.834] has:job.batch/test-job
I0814 13:38:39.965] batch.sh:48: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: 
I0814 13:38:40.093] job.batch/test-job created
W0814 13:38:40.194] I0814 13:38:40.101354   53048 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"c0ca4d2d-d9d9-44c3-8329-9e5682f3b8b0", APIVersion:"batch/v1", ResourceVersion:"1375", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-w2klf
W0814 13:38:40.195] E0814 13:38:40.180850   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:40.295] batch.sh:53: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
I0814 13:38:40.356] NAME       COMPLETIONS   DURATION   AGE
I0814 13:38:40.357] test-job   0/1           0s         0s
W0814 13:38:40.458] E0814 13:38:40.310136   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:40.494] E0814 13:38:40.493638   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:40.594] Name:           test-job
I0814 13:38:40.595] Namespace:      test-jobs
I0814 13:38:40.595] Selector:       controller-uid=c0ca4d2d-d9d9-44c3-8329-9e5682f3b8b0
I0814 13:38:40.595] Labels:         controller-uid=c0ca4d2d-d9d9-44c3-8329-9e5682f3b8b0
I0814 13:38:40.595]                 job-name=test-job
I0814 13:38:40.596]                 run=pi
I0814 13:38:40.596] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0814 13:38:40.596] Controlled By:  CronJob/pi
I0814 13:38:40.596] Parallelism:    1
I0814 13:38:40.596] Completions:    1
I0814 13:38:40.596] Start Time:     Wed, 14 Aug 2019 13:38:40 +0000
I0814 13:38:40.597] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0814 13:38:40.597] Pod Template:
I0814 13:38:40.597]   Labels:  controller-uid=c0ca4d2d-d9d9-44c3-8329-9e5682f3b8b0
I0814 13:38:40.597]            job-name=test-job
I0814 13:38:40.597]            run=pi
I0814 13:38:40.597]   Containers:
I0814 13:38:40.597]    pi:
... skipping 13 lines ...
I0814 13:38:40.598]   Volumes:        <none>
I0814 13:38:40.599] Events:
I0814 13:38:40.599]   Type    Reason            Age   From            Message
I0814 13:38:40.599]   ----    ------            ----  ----            -------
I0814 13:38:40.599]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-w2klf
I0814 13:38:40.612] job.batch "test-job" deleted
W0814 13:38:40.714] E0814 13:38:40.667985   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:40.815] cronjob.batch "pi" deleted
I0814 13:38:40.860] namespace "test-jobs" deleted
W0814 13:38:41.185] E0814 13:38:41.184081   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:41.313] E0814 13:38:41.312196   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:41.497] E0814 13:38:41.496678   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:41.671] E0814 13:38:41.670480   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:42.187] E0814 13:38:42.186683   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:42.316] E0814 13:38:42.315116   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:42.500] E0814 13:38:42.499181   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:42.673] E0814 13:38:42.672289   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:43.190] E0814 13:38:43.189325   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:43.318] E0814 13:38:43.317120   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:43.502] E0814 13:38:43.501581   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:43.676] E0814 13:38:43.675812   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:44.191] E0814 13:38:44.190879   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:44.321] E0814 13:38:44.320013   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:44.507] E0814 13:38:44.504826   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:44.679] E0814 13:38:44.678232   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:45.195] E0814 13:38:45.193300   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:45.324] E0814 13:38:45.323479   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:45.508] E0814 13:38:45.507617   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:45.681] E0814 13:38:45.680706   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:46.041] +++ exit code: 0
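A sketch of the job-test flow above. The deprecation warning in the log shows the CronJob "pi" was created with the old run generator; the image and command here are illustrative assumptions. Creating the Job from the CronJob matches the "cronjob.kubernetes.io/instantiate: manual" annotation and "Controlled By: CronJob/pi" in the describe output.

kubectl create namespace test-jobs
kubectl run pi --namespace=test-jobs --generator=cronjob/v1beta1 --schedule="59 23 31 2 *" \
  --restart=OnFailure --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(20)'
kubectl create job test-job --namespace=test-jobs --from=cronjob/pi
kubectl describe job test-job --namespace=test-jobs
kubectl delete job test-job --namespace=test-jobs
kubectl delete cronjob pi --namespace=test-jobs
kubectl delete namespace test-jobs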
I0814 13:38:46.087] Recording: run_create_job_tests
I0814 13:38:46.088] Running command: run_create_job_tests
I0814 13:38:46.116] 
I0814 13:38:46.120] +++ Running case: test-cmd.run_create_job_tests 
I0814 13:38:46.123] +++ working dir: /go/src/k8s.io/kubernetes
I0814 13:38:46.125] +++ command: run_create_job_tests
I0814 13:38:46.142] +++ [0814 13:38:46] Creating namespace namespace-1565789926-7292
I0814 13:38:46.235] namespace/namespace-1565789926-7292 created
I0814 13:38:46.325] Context "test" modified.
W0814 13:38:46.426] E0814 13:38:46.194948   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:46.427] E0814 13:38:46.326216   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:46.444] I0814 13:38:46.443663   53048 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565789926-7292", Name:"test-job", UID:"22c743b8-c629-4e49-948e-8847a58049ab", APIVersion:"batch/v1", ResourceVersion:"1393", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-7zstj
W0814 13:38:46.511] E0814 13:38:46.510297   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:46.612] job.batch/test-job created
I0814 13:38:46.612] create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
I0814 13:38:46.685] job.batch "test-job" deleted
W0814 13:38:46.786] E0814 13:38:46.682427   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:46.795] I0814 13:38:46.794895   53048 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565789926-7292", Name:"test-job-pi", UID:"dec2b833-af42-4f0a-9f76-8cad38f0a716", APIVersion:"batch/v1", ResourceVersion:"1400", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-nvzhn
I0814 13:38:46.896] job.batch/test-job-pi created
I0814 13:38:46.930] create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
I0814 13:38:47.038] job.batch "test-job-pi" deleted
W0814 13:38:47.140] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 13:38:47.197] E0814 13:38:47.196894   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:47.278] I0814 13:38:47.277868   53048 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565789926-7292", Name:"my-pi", UID:"d7dd153c-2053-491e-8fe4-76fad1edbea3", APIVersion:"batch/v1", ResourceVersion:"1408", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-rq4fm
W0814 13:38:47.330] E0814 13:38:47.328944   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:47.430] cronjob.batch/test-pi created
I0814 13:38:47.431] job.batch/my-pi created
I0814 13:38:47.431] Successful
I0814 13:38:47.431] message:[perl -Mbignum=bpi -wle print bpi(10)]
I0814 13:38:47.431] has:perl -Mbignum=bpi -wle print bpi(10)
W0814 13:38:47.532] I0814 13:38:47.491938   53048 event.go:255] Event(v1.ObjectReference{Kind:"CronJob", Namespace:"namespace-1565789926-7292", Name:"test-pi", UID:"ed1a46e4-6d72-43fa-8977-1eb38492edb2", APIVersion:"batch/v1beta1", ResourceVersion:"1407", FieldPath:""}): type: 'Warning' reason: 'UnexpectedJob' Saw a job that the controller did not create or forgot: my-pi
W0814 13:38:47.533] E0814 13:38:47.513703   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:47.633] job.batch "my-pi" deleted
I0814 13:38:47.648] cronjob.batch "test-pi" deleted
I0814 13:38:47.684] +++ exit code: 0
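A sketch of the create-job checks above. The images and the perl command come from the create.sh assertions in the log; the cronjob "test-pi" used for --from is assumed to already exist (the log shows it created via the deprecated run generator).

kubectl create job test-job --image=k8s.gcr.io/nginx:test-cmd
kubectl get job test-job -o go-template='{{(index .spec.template.spec.containers 0).image}}'
kubectl create job test-job-pi --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(10)'
kubectl create job my-pi --from=cronjob/test-pi
kubectl get job my-pi -o go-template='{{(index .spec.template.spec.containers 0).command}}'   # [perl -Mbignum=bpi -wle print bpi(10)]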
I0814 13:38:47.760] Recording: run_pod_templates_tests
I0814 13:38:47.760] Running command: run_pod_templates_tests
I0814 13:38:47.799] 
I0814 13:38:47.804] +++ Running case: test-cmd.run_pod_templates_tests 
I0814 13:38:47.809] +++ working dir: /go/src/k8s.io/kubernetes
I0814 13:38:47.813] +++ command: run_pod_templates_tests
I0814 13:38:47.838] +++ [0814 13:38:47] Creating namespace namespace-1565789927-25138
W0814 13:38:47.939] E0814 13:38:47.684840   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:48.040] namespace/namespace-1565789927-25138 created
I0814 13:38:48.041] Context "test" modified.
I0814 13:38:48.052] +++ [0814 13:38:48] Testing pod templates
I0814 13:38:48.187] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:38:48.390] podtemplate/nginx created
W0814 13:38:48.492] E0814 13:38:48.199845   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:48.492] E0814 13:38:48.331885   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:48.492] I0814 13:38:48.385325   49595 controller.go:606] quota admission added evaluator for: podtemplates
W0814 13:38:48.516] E0814 13:38:48.515860   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:48.618] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 13:38:48.647] NAME    CONTAINERS   IMAGES   POD LABELS
I0814 13:38:48.647] nginx   nginx        nginx    name=nginx
W0814 13:38:48.749] E0814 13:38:48.687380   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:48.900] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 13:38:49.004] podtemplate "nginx" deleted
I0814 13:38:49.133] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 13:38:49.153] +++ exit code: 0
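A sketch of a PodTemplate consistent with the CONTAINERS/IMAGES/POD LABELS columns shown above; the actual fixture the test loads is not in the log, so this manifest is an assumption.

# create a minimal PodTemplate from stdin (fields inferred from the table above)
kubectl create -f - <<EOF
apiVersion: v1
kind: PodTemplate
metadata:
  name: nginx
template:
  metadata:
    labels:
      name: nginx
  spec:
    containers:
    - name: nginx
      image: nginx
EOF
kubectl get podtemplates
kubectl delete podtemplate nginx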
I0814 13:38:49.222] Recording: run_service_tests
I0814 13:38:49.222] Running command: run_service_tests
I0814 13:38:49.257] 
I0814 13:38:49.262] +++ Running case: test-cmd.run_service_tests 
I0814 13:38:49.267] +++ working dir: /go/src/k8s.io/kubernetes
I0814 13:38:49.272] +++ command: run_service_tests
I0814 13:38:49.372] Context "test" modified.
I0814 13:38:49.384] +++ [0814 13:38:49] Testing kubectl(v1:services)
W0814 13:38:49.485] E0814 13:38:49.201760   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:49.486] E0814 13:38:49.334317   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:49.518] E0814 13:38:49.517364   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:49.619] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 13:38:49.735] service/redis-master created
W0814 13:38:49.836] E0814 13:38:49.688895   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:49.937] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 13:38:50.071] core.sh:864: Successful describe services redis-master:
I0814 13:38:50.071] Name:              redis-master
I0814 13:38:50.072] Namespace:         default
I0814 13:38:50.072] Labels:            app=redis
I0814 13:38:50.072]                    role=master
... skipping 20 lines ...
I0814 13:38:50.219] Port:              <unset>  6379/TCP
I0814 13:38:50.219] TargetPort:        6379/TCP
I0814 13:38:50.219] Endpoints:         <none>
I0814 13:38:50.220] Session Affinity:  None
I0814 13:38:50.220] Events:            <none>
I0814 13:38:50.220] 
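A sketch of a Service consistent with the redis-master describe output above; the labels and port appear in the log, while the selector and the exact fixture the test loads are assumptions.

# create a minimal redis-master Service from stdin (fields inferred from the describe above)
kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  selector:
    app: redis
    role: master
    tier: backend
  ports:
  - port: 6379
    targetPort: 6379
EOF
kubectl describe services redis-master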
W0814 13:38:50.321] E0814 13:38:50.203155   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:50.336] E0814 13:38:50.335966   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:50.438] core.sh:868: Successful describe
I0814 13:38:50.439] Name:              redis-master
I0814 13:38:50.439] Namespace:         default
I0814 13:38:50.440] Labels:            app=redis
I0814 13:38:50.440]                    role=master
I0814 13:38:50.440]                    tier=backend
... skipping 19 lines ...
I0814 13:38:50.518] Port:              <unset>  6379/TCP
I0814 13:38:50.518] TargetPort:        6379/TCP
I0814 13:38:50.518] Endpoints:         <none>
I0814 13:38:50.518] Session Affinity:  None
I0814 13:38:50.518] Events:            <none>
I0814 13:38:50.519] 
W0814 13:38:50.619] E0814 13:38:50.519242   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:50.692] E0814 13:38:50.691210   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:50.793] Successful describe services:
I0814 13:38:50.793] Name:              kubernetes
I0814 13:38:50.793] Namespace:         default
I0814 13:38:50.794] Labels:            component=apiserver
I0814 13:38:50.794]                    provider=kubernetes
I0814 13:38:50.794] Annotations:       <none>
... skipping 124 lines ...
I0814 13:38:51.394]   - port: 6379
I0814 13:38:51.394]     targetPort: 6379
I0814 13:38:51.394]   selector:
I0814 13:38:51.394]     role: padawan
I0814 13:38:51.394] status:
I0814 13:38:51.394]   loadBalancer: {}
W0814 13:38:51.495] E0814 13:38:51.204560   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:51.496] E0814 13:38:51.338127   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 13:38:51.521] E0814 13:38:51.520818   53048 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 13:38:51.622] apiVersion: v1
I0814 13:38:51.622] kind: Service
I0814 13:38:51.623] metadata:
I0814 13:38:51.623]   creationTimestamp: "2019-08-14T13:38:49Z"
I0814 13:38:51.623]   labels:
I0814 13:38:51.624]     app: redis