PR: draveness: feat: use named array instead of array in normalizing score
Result: FAILURE
Tests: 1 failed / 2470 succeeded
Started: 2019-08-14 06:28
Elapsed: 26m39s
Revision: master:a520302f, 80901:aa5f9fda
Builder: gke-prow-ssd-pool-1a225945-4sp7
pod: 96cd5f77-be5c-11e9-bd2d-f6f3c4187ecc
infra-commit: 89e6e9743
repo: k8s.io/kubernetes
repo-commit: 11b635fd98189f524c06c025687efc0fe976b5ff
repos: {u'k8s.io/kubernetes': u'master:a520302fb4673e595fcb70d2a4db26598371be92,80901:aa5f9fda52d0171e45682254e0d37b16f58ae6fc'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptWithPermitPlugin 1m4s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptWithPermitPlugin$
=== RUN   TestPreemptWithPermitPlugin
I0814 06:49:32.387412  110531 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 06:49:32.387493  110531 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 06:49:32.387506  110531 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 06:49:32.387514  110531 master.go:234] Using reconciler: 
I0814 06:49:32.388780  110531 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.388936  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.389075  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.389115  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.389159  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.389660  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.389783  110531 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 06:49:32.389810  110531 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.389872  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.389920  110531 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 06:49:32.389935  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.389944  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.389965  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.390193  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.390781  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.390915  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.391090  110531 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 06:49:32.391098  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.391112  110531 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.391138  110531 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 06:49:32.391162  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.391169  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.391189  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.391322  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.391548  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.391662  110531 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 06:49:32.391687  110531 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.391746  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.391755  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.391780  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.391816  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.391841  110531 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 06:49:32.392072  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.392998  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.393139  110531 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 06:49:32.393180  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.393300  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.393292  110531 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.393380  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.393391  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.393414  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.393420  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.393445  110531 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 06:49:32.393478  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.394305  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.394348  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.394424  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.394466  110531 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 06:49:32.394502  110531 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 06:49:32.394609  110531 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.394701  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.394711  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.394740  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.394781  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.395109  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.395437  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.395580  110531 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 06:49:32.395661  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.395722  110531 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.395768  110531 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 06:49:32.395831  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.395845  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.395877  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.395971  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.396240  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.396343  110531 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 06:49:32.396471  110531 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.396537  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.396548  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.396579  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.396628  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.396657  110531 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 06:49:32.396842  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.397111  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.397239  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.397307  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.397351  110531 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 06:49:32.397427  110531 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 06:49:32.397475  110531 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.397538  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.397549  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.397576  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.397620  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.397887  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.398038  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.398323  110531 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 06:49:32.398607  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.399141  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.399644  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.400254  110531 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 06:49:32.400394  110531 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.400460  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.400470  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.400500  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.400621  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.400938  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.401055  110531 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 06:49:32.401180  110531 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.401242  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.401253  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.401283  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.401334  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.401392  110531 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 06:49:32.401737  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.402065  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.402179  110531 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 06:49:32.402207  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.402245  110531 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 06:49:32.402360  110531 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.402489  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.402502  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.402535  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.402603  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.402753  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.402960  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.403049  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.403089  110531 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 06:49:32.403216  110531 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.403282  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.403292  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.403319  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.403354  110531 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 06:49:32.403443  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.403466  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.404156  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.404558  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.404770  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.404900  110531 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 06:49:32.405039  110531 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.405106  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.405117  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.405148  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.405186  110531 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 06:49:32.405742  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.406059  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.406210  110531 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 06:49:32.406235  110531 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.406297  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.406321  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.406332  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.406337  110531 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 06:49:32.406368  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.406474  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.406709  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.406816  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.406851  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.407052  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.407092  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.407134  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.407429  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.407573  110531 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.407641  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.407651  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.407677  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.407797  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.407869  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.407923  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.408410  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.409220  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.409267  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.409327  110531 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 06:49:32.409374  110531 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 06:49:32.410809  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.411683  110531 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.411854  110531 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.412418  110531 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.412987  110531 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.413450  110531 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.413906  110531 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.414367  110531 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.414553  110531 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.414716  110531 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.415194  110531 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.415662  110531 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.415790  110531 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.416338  110531 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.416513  110531 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.416831  110531 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.416965  110531 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.417461  110531 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.417595  110531 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.417682  110531 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.417747  110531 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.417896  110531 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.418057  110531 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.418200  110531 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.418699  110531 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.418893  110531 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.419431  110531 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.419974  110531 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.420178  110531 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.420492  110531 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.421078  110531 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.421315  110531 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.421853  110531 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.422481  110531 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.422923  110531 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.423543  110531 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.423730  110531 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.423810  110531 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 06:49:32.423823  110531 master.go:434] Enabling API group "authentication.k8s.io".
I0814 06:49:32.423833  110531 master.go:434] Enabling API group "authorization.k8s.io".
I0814 06:49:32.423942  110531 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.424029  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.424042  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.424078  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.424123  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.424617  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.424795  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.424962  110531 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 06:49:32.425067  110531 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 06:49:32.425131  110531 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.425200  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.425211  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.425244  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.425295  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.426317  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.426378  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.426421  110531 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 06:49:32.426494  110531 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 06:49:32.426551  110531 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.426629  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.426639  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.426673  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.426710  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.427202  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.427246  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.427299  110531 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 06:49:32.427316  110531 master.go:434] Enabling API group "autoscaling".
I0814 06:49:32.427372  110531 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 06:49:32.427460  110531 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.427533  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.427546  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.427577  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.427628  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.427852  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.427954  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.427968  110531 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 06:49:32.427994  110531 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 06:49:32.428141  110531 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.428208  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.428218  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.428250  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.428310  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.428356  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.428623  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.428715  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.428742  110531 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 06:49:32.428759  110531 master.go:434] Enabling API group "batch".
I0814 06:49:32.428884  110531 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.428949  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.428960  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.428988  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.429140  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.429163  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.429184  110531 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 06:49:32.429352  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.429593  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.429682  110531 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 06:49:32.429698  110531 master.go:434] Enabling API group "certificates.k8s.io".
I0814 06:49:32.429703  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.429822  110531 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.429859  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.429879  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.429888  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.429917  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.429961  110531 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 06:49:32.430243  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.430886  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.431220  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.431744  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.431848  110531 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 06:49:32.431871  110531 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 06:49:32.431964  110531 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.431989  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.432036  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.432046  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.432073  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.432156  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.432378  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.432446  110531 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 06:49:32.432457  110531 master.go:434] Enabling API group "coordination.k8s.io".
I0814 06:49:32.432463  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.432484  110531 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 06:49:32.432570  110531 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.432623  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.432632  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.432657  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.432697  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.433672  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.433765  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.436578  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.436628  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.436719  110531 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 06:49:32.436743  110531 master.go:434] Enabling API group "extensions".
I0814 06:49:32.436746  110531 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 06:49:32.436886  110531 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.436957  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.436968  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.436999  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.437120  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.437368  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.437427  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.437440  110531 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 06:49:32.437475  110531 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 06:49:32.437573  110531 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.437638  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.437648  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.437676  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.437715  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.437963  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.437997  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.438055  110531 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 06:49:32.438075  110531 master.go:434] Enabling API group "networking.k8s.io".
I0814 06:49:32.438104  110531 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.438127  110531 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 06:49:32.438200  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.438210  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.438236  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.438316  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.438552  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.438579  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.438628  110531 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 06:49:32.438642  110531 master.go:434] Enabling API group "node.k8s.io".
I0814 06:49:32.438682  110531 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 06:49:32.438764  110531 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.438823  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.438832  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.438860  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.438901  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.439139  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.439228  110531 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 06:49:32.439263  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.439293  110531 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 06:49:32.439378  110531 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.439439  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.439449  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.439478  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.439530  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.439749  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.439931  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.439981  110531 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 06:49:32.439995  110531 master.go:434] Enabling API group "policy".
I0814 06:49:32.440051  110531 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.440109  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.440119  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.440124  110531 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 06:49:32.440147  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.440305  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.440564  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.440678  110531 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 06:49:32.440800  110531 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.440863  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.440873  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.440903  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.440941  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.440970  110531 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 06:49:32.441220  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.443571  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.443713  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.443932  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.443984  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.444096  110531 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 06:49:32.444102  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.444126  110531 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.444184  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.444193  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.444217  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.444220  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.444301  110531 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 06:49:32.444421  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.444440  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.444715  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.444858  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.444940  110531 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 06:49:32.444948  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.444975  110531 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 06:49:32.445117  110531 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.445180  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.445190  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.445213  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.445259  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.445450  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.445503  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.445556  110531 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 06:49:32.445594  110531 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.445636  110531 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 06:49:32.445650  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.445659  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.445684  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.445737  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.446026  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.446123  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.446162  110531 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 06:49:32.446296  110531 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.446350  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.446360  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.446390  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.446453  110531 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 06:49:32.446630  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.447457  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.447507  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.447562  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.447570  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.448579  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.448618  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.448753  110531 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 06:49:32.448784  110531 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.448844  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.448853  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.448881  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.449111  110531 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 06:49:32.449236  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.449529  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.449657  110531 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 06:49:32.449709  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.449802  110531 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.449938  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.449959  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.449991  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.450032  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.450094  110531 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 06:49:32.450201  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.450423  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.450489  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.450513  110531 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 06:49:32.450540  110531 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0814 06:49:32.450569  110531 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 06:49:32.451494  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.452227  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.452326  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.452436  110531 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.452552  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.452564  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.452593  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.452639  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.452882  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.452978  110531 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 06:49:32.453143  110531 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.453210  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.453222  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.453248  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.453341  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.453369  110531 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 06:49:32.454693  110531 watch_cache.go:405] Replace watchCache (rev: 28864) 
I0814 06:49:32.454879  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.455655  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.455744  110531 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 06:49:32.455759  110531 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 06:49:32.455885  110531 master.go:423] Skipping disabled API group "settings.k8s.io".
I0814 06:49:32.455999  110531 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.456072  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.456082  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.456112  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.456140  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.456160  110531 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 06:49:32.456304  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.456931  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.457033  110531 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 06:49:32.457101  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.457129  110531 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.457152  110531 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 06:49:32.457183  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.457189  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.457234  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.457477  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.457580  110531 watch_cache.go:405] Replace watchCache (rev: 28865) 
I0814 06:49:32.457685  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.457770  110531 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 06:49:32.457780  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.457797  110531 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.457812  110531 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 06:49:32.457849  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.457859  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.457883  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.458058  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.458280  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.458341  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.458347  110531 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 06:49:32.458372  110531 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.458392  110531 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 06:49:32.458428  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.458744  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.458798  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.458839  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.460055  110531 watch_cache.go:405] Replace watchCache (rev: 28865) 
I0814 06:49:32.460426  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.460457  110531 watch_cache.go:405] Replace watchCache (rev: 28865) 
I0814 06:49:32.460482  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.460639  110531 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 06:49:32.460668  110531 watch_cache.go:405] Replace watchCache (rev: 28865) 
I0814 06:49:32.460786  110531 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.460853  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.460864  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.460897  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.460971  110531 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 06:49:32.461511  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.461716  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.461774  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.461806  110531 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 06:49:32.461835  110531 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 06:49:32.461920  110531 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.461973  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.461983  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.462009  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.462091  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.463048  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.463129  110531 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 06:49:32.463174  110531 master.go:434] Enabling API group "storage.k8s.io".
I0814 06:49:32.463292  110531 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.463304  110531 watch_cache.go:405] Replace watchCache (rev: 28866) 
I0814 06:49:32.463346  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.463355  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.463380  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.463424  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.463450  110531 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 06:49:32.463579  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.463793  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.463885  110531 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 06:49:32.463994  110531 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.464065  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.464076  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.464101  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.464149  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.464176  110531 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 06:49:32.464343  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.464586  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.464671  110531 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 06:49:32.464765  110531 watch_cache.go:405] Replace watchCache (rev: 28866) 
I0814 06:49:32.464792  110531 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.464843  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.464852  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.464907  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.464937  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.464989  110531 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 06:49:32.465147  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.465333  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.465425  110531 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 06:49:32.465440  110531 watch_cache.go:405] Replace watchCache (rev: 28867) 
I0814 06:49:32.465532  110531 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.465585  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.465594  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.465621  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.465713  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.465742  110531 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 06:49:32.465919  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.466093  110531 watch_cache.go:405] Replace watchCache (rev: 28866) 
I0814 06:49:32.466623  110531 watch_cache.go:405] Replace watchCache (rev: 28867) 
I0814 06:49:32.466692  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.466817  110531 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 06:49:32.466924  110531 watch_cache.go:405] Replace watchCache (rev: 28867) 
I0814 06:49:32.466955  110531 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.467048  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.467061  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.467097  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.467139  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.467222  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.467269  110531 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 06:49:32.467463  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.467488  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.467569  110531 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 06:49:32.467584  110531 master.go:434] Enabling API group "apps".
I0814 06:49:32.467608  110531 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 06:49:32.467613  110531 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.467676  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.467686  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.467715  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.467872  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.467945  110531 watch_cache.go:405] Replace watchCache (rev: 28867) 
I0814 06:49:32.468135  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.468171  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.468211  110531 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 06:49:32.468239  110531 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.468269  110531 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 06:49:32.468312  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.468323  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.468351  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.468483  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.468526  110531 watch_cache.go:405] Replace watchCache (rev: 28867) 
I0814 06:49:32.468760  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.468788  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.468851  110531 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 06:49:32.468880  110531 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.468943  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.468948  110531 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 06:49:32.468953  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.469088  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.469126  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.469469  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.469537  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.469549  110531 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 06:49:32.469579  110531 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.469626  110531 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 06:49:32.469669  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.469679  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.469706  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.469805  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.470053  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.470098  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.470141  110531 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 06:49:32.470154  110531 watch_cache.go:405] Replace watchCache (rev: 28867) 
I0814 06:49:32.470155  110531 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 06:49:32.470187  110531 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.470234  110531 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 06:49:32.470355  110531 client.go:354] parsed scheme: ""
I0814 06:49:32.470367  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:32.470395  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:32.470492  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.470863  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:32.470917  110531 watch_cache.go:405] Replace watchCache (rev: 28867) 
I0814 06:49:32.470987  110531 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 06:49:32.471008  110531 master.go:434] Enabling API group "events.k8s.io".
I0814 06:49:32.471215  110531 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.471268  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:32.471365  110531 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 06:49:32.471377  110531 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.471607  110531 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.471702  110531 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.471810  110531 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.471893  110531 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.471985  110531 watch_cache.go:405] Replace watchCache (rev: 28867) 
I0814 06:49:32.472063  110531 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.472144  110531 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.472222  110531 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.472353  110531 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.472698  110531 watch_cache.go:405] Replace watchCache (rev: 28867) 
I0814 06:49:32.473225  110531 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.473482  110531 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.474211  110531 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.474371  110531 watch_cache.go:405] Replace watchCache (rev: 28867) 
I0814 06:49:32.474431  110531 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.475153  110531 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.475386  110531 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.476220  110531 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.476570  110531 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.477319  110531 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.477598  110531 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 06:49:32.477710  110531 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0814 06:49:32.478408  110531 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.478629  110531 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.479003  110531 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.479949  110531 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.480687  110531 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.481484  110531 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.481802  110531 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.482846  110531 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.483624  110531 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.483960  110531 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.484697  110531 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 06:49:32.484833  110531 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 06:49:32.485660  110531 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.485978  110531 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.486579  110531 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.487252  110531 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.487753  110531 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.488551  110531 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.489262  110531 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.489912  110531 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.490472  110531 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.491167  110531 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.491789  110531 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 06:49:32.491902  110531 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 06:49:32.492476  110531 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.493099  110531 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 06:49:32.493251  110531 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 06:49:32.493915  110531 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.494497  110531 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.494745  110531 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.495307  110531 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.495794  110531 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.496351  110531 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.497033  110531 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 06:49:32.497147  110531 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 06:49:32.497986  110531 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.498762  110531 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.499087  110531 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.499795  110531 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.500140  110531 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.500413  110531 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.501203  110531 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.501495  110531 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.501755  110531 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.502526  110531 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.502910  110531 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.503217  110531 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 06:49:32.503326  110531 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 06:49:32.503376  110531 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0814 06:49:32.504006  110531 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.504836  110531 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.505521  110531 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.506157  110531 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.506929  110531 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"30e54db6-0098-471b-9e2c-3a567dd0818e", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:49:32.509621  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:32.509647  110531 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 06:49:32.509655  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:32.509662  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:32.509668  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:32.509674  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:32.509700  110531 httplog.go:90] GET /healthz: (160.098µs) 0 [Go-http-client/1.1 127.0.0.1:38060]
I0814 06:49:32.511463  110531 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.525418ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38062]
I0814 06:49:32.514176  110531 httplog.go:90] GET /api/v1/services: (1.189654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38062]
I0814 06:49:32.517678  110531 httplog.go:90] GET /api/v1/services: (797.612µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38062]
I0814 06:49:32.519666  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:32.519861  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:32.520758  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.071655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38060]
I0814 06:49:32.520846  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:32.520862  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:32.520871  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:32.520897  110531 httplog.go:90] GET /healthz: (1.303121ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38062]
I0814 06:49:32.521717  110531 httplog.go:90] GET /api/v1/services: (853.007µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:32.522198  110531 httplog.go:90] GET /api/v1/services: (1.513635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38064]
I0814 06:49:32.522415  110531 httplog.go:90] POST /api/v1/namespaces: (1.337595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38060]
I0814 06:49:32.523501  110531 httplog.go:90] GET /api/v1/namespaces/kube-public: (804.328µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:32.525235  110531 httplog.go:90] POST /api/v1/namespaces: (1.425093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:32.526552  110531 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (863.68µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:32.528163  110531 httplog.go:90] POST /api/v1/namespaces: (1.222028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:32.610552  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:32.610594  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:32.610608  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:32.610618  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:32.610626  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:32.610686  110531 httplog.go:90] GET /healthz: (267.336µs) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:32.621696  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:32.621728  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:32.621740  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:32.621750  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:32.621758  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:32.621790  110531 httplog.go:90] GET /healthz: (229.202µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:32.710526  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:32.710572  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:32.710585  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:32.710595  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:32.710603  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:32.710633  110531 httplog.go:90] GET /healthz: (242.856µs) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:32.721567  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:32.721594  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:32.721606  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:32.721616  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:32.721640  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:32.721668  110531 httplog.go:90] GET /healthz: (225.054µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:32.810477  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:32.810516  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:32.810529  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:32.810540  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:32.810548  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:32.810590  110531 httplog.go:90] GET /healthz: (245.025µs) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:32.821477  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:32.821512  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:32.821525  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:32.821535  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:32.821542  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:32.821569  110531 httplog.go:90] GET /healthz: (215.769µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:32.911337  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:32.911368  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:32.911381  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:32.911391  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:32.911399  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:32.911427  110531 httplog.go:90] GET /healthz: (263.147µs) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:32.921617  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:32.921652  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:32.921664  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:32.921674  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:32.921682  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:32.921711  110531 httplog.go:90] GET /healthz: (259.764µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.010481  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:33.010517  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.010531  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:33.010541  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:33.010549  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:33.010576  110531 httplog.go:90] GET /healthz: (246.99µs) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:33.021681  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:33.021718  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.021731  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:33.021740  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:33.021747  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:33.021781  110531 httplog.go:90] GET /healthz: (239.91µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.110529  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:33.110559  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.110572  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:33.110582  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:33.110591  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:33.110627  110531 httplog.go:90] GET /healthz: (235.471µs) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:33.121523  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:33.121566  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.121579  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:33.121590  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:33.121597  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:33.121633  110531 httplog.go:90] GET /healthz: (230.136µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.213786  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:33.213817  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.213829  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:33.213839  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:33.213847  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:33.213886  110531 httplog.go:90] GET /healthz: (250.839µs) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:33.221471  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:33.221500  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.221511  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:33.221519  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:33.221526  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:33.221550  110531 httplog.go:90] GET /healthz: (199.182µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.310460  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:33.310498  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.310511  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:33.310521  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:33.310533  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:33.310563  110531 httplog.go:90] GET /healthz: (237.599µs) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:33.321545  110531 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:49:33.321585  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.321598  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:33.321608  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:33.321616  110531 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:33.321645  110531 httplog.go:90] GET /healthz: (227.172µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.387742  110531 client.go:354] parsed scheme: ""
I0814 06:49:33.387781  110531 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:49:33.387832  110531 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:49:33.387906  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:33.388703  110531 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:49:33.388756  110531 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:49:33.412539  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.412576  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:33.412587  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:33.412595  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:33.412633  110531 httplog.go:90] GET /healthz: (2.220631ms) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:33.422303  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.422335  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:33.422346  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:33.422354  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:33.422413  110531 httplog.go:90] GET /healthz: (906.122µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.511110  110531 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.6526ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.511161  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.708614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38062]
I0814 06:49:33.511495  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.511519  110531 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:49:33.511530  110531 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:49:33.511539  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:49:33.511587  110531 httplog.go:90] GET /healthz: (1.066694ms) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:33.512842  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.307702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38204]
I0814 06:49:33.512854  110531 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (919.776µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38062]
I0814 06:49:33.513578  110531 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.727136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.513829  110531 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 06:49:33.514111  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (918.888µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38204]
I0814 06:49:33.514590  110531 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.283546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.515580  110531 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.140544ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.516339  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.76569ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38204]
I0814 06:49:33.517631  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.036873ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38204]
I0814 06:49:33.518154  110531 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.260817ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.518306  110531 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 06:49:33.518318  110531 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0814 06:49:33.519265  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (921.128µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38204]
I0814 06:49:33.520805  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.04913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.521971  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.521998  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:33.522099  110531 httplog.go:90] GET /healthz: (784.668µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.522171  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.090145ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.523172  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (722.697µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.524440  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (885.111µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.525945  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.160552ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.527583  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.169343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.527747  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
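From here the log settles into one reconcile shape per default clusterrole: a GET that 404s because the role is absent, a POST that 201s, and a storage_rbac.go:219 confirmation line. A guess at that shape in client-go terms (not the apiserver's actual bootstrap policy code):

package rbacbootstrap

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/klog/v2"
)

// ensureClusterRole mirrors the GET-404-then-POST-201 sequence in the httplog
// lines: look the role up first, create it only when it is genuinely absent.
func ensureClusterRole(ctx context.Context, cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present; nothing to reconcile
	}
	if !apierrors.IsNotFound(err) {
		return err // a real failure, not the expected 404
	}
	if _, err := cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{}); err != nil {
		return err
	}
	klog.Infof("created clusterrole.rbac.authorization.k8s.io/%s", role.Name)
	return nil
}

The same shape repeats later in the log for clusterrolebindings, which in this sketch would go through RbacV1().ClusterRoleBindings() instead.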
I0814 06:49:33.528687  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (728.208µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.530812  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.690118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.530937  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 06:49:33.531803  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (681.364µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.533566  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.275607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.533738  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 06:49:33.534575  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (686.245µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.536219  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.359221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.536399  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 06:49:33.537224  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (667.138µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.539295  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.551237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.539450  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 06:49:33.540204  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (658.944µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.541896  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.327908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.542064  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 06:49:33.543229  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (876.256µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.544810  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.251727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.544966  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 06:49:33.545982  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (715.086µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.548758  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (993.704µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.549068  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 06:49:33.550009  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (730.907µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.551896  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.296644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.552348  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 06:49:33.553240  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (686.971µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.554658  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.134408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.554911  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 06:49:33.555807  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (720.747µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.558049  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.753882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.558262  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 06:49:33.559193  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (738.771µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.560777  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.31898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.561134  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 06:49:33.562114  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (760.309µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.589555  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (27.011132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.590090  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 06:49:33.592185  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.350951ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.594748  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.208542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.595084  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 06:49:33.597430  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (2.197161ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.603317  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.585955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.603539  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 06:49:33.604720  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (918.222µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.606459  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.314543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.606622  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 06:49:33.607902  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (970.031µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.610164  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.5669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.610319  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 06:49:33.610921  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.610976  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:33.611007  110531 httplog.go:90] GET /healthz: (826.562µs) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:33.612397  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.79369ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.614057  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.390797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.614220  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 06:49:33.615325  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (871.918µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.618739  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.251736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.619221  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 06:49:33.620059  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (698.029µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.621826  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.429855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.622100  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.622229  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:33.622520  110531 httplog.go:90] GET /healthz: (1.201866ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.622152  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 06:49:33.623823  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (895.112µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.625110  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (977.72µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.625353  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 06:49:33.626247  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (750.091µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.627689  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.115063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.628350  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 06:49:33.629142  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (629.883µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.630585  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.138854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.630846  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 06:49:33.632292  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.264668ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.634605  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.998637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.634761  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 06:49:33.635644  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (735.788µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.637278  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.180052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.637589  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0814 06:49:33.638733  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (909.034µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.640952  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.734195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.641261  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 06:49:33.642543  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.111583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.644450  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.514694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.644648  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 06:49:33.645414  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (612.345µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.646891  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.166646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.647082  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 06:49:33.648537  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.276637ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.650080  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.146734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.650246  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 06:49:33.651037  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (627.658µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.652368  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.091906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.652533  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 06:49:33.653418  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (712.288µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.654883  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.129667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.655077  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 06:49:33.655938  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (668.603µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.657780  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.377145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.657957  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 06:49:33.658908  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (716.881µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.660797  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.52849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.660987  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 06:49:33.661932  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (639.005µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.663570  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.305774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.663735  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 06:49:33.664701  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (760.774µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.666521  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.350727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.666951  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 06:49:33.668536  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (663.625µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.669903  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.120218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.670122  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 06:49:33.671075  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (700.094µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.672802  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.199116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.673045  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 06:49:33.673927  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (709.056µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.675376  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.086325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.676592  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 06:49:33.678620  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (600.559µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.680220  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.029603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.680385  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 06:49:33.681400  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (865.976µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.685903  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.178238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.686180  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 06:49:33.687517  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.067474ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.689176  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.213828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.689363  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 06:49:33.690146  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (602.847µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.691609  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.126848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.691820  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 06:49:33.692777  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (725.217µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.694487  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.139637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.694695  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 06:49:33.695682  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (809.518µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.698202  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.979317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.698449  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 06:49:33.699595  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (977.315µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.701884  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.978367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.702069  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 06:49:33.702975  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (777.595µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.704699  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.247312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.704994  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 06:49:33.705925  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (681.425µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.707609  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.185622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.707912  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 06:49:33.709160  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (834.932µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.710991  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.16441ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:33.711206  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.711236  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:33.711266  110531 httplog.go:90] GET /healthz: (1.027421ms) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:33.711402  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 06:49:33.712352  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (766.488µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.714059  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.121866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.714318  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 06:49:33.715583  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.006984ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.717392  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.212178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.717608  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 06:49:33.718674  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (854.327µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.721974  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.722026  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:33.722053  110531 httplog.go:90] GET /healthz: (732.537µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.732582  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.474896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.732941  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 06:49:33.751711  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.336168ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.772090  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.710428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.772386  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 06:49:33.792144  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.026983ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.810973  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.811003  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:33.811040  110531 httplog.go:90] GET /healthz: (736.024µs) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:33.811926  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.680824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.812150  110531 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 06:49:33.822536  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.822571  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:33.822604  110531 httplog.go:90] GET /healthz: (1.255387ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.831976  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.570396ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.852246  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.013307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.852472  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 06:49:33.871249  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.023358ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.892256  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.024692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.892479  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 06:49:33.911154  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.911185  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:33.911219  110531 httplog.go:90] GET /healthz: (920.373µs) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:33.912258  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.965922ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.922930  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:33.922960  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:33.922993  110531 httplog.go:90] GET /healthz: (880.067µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.932473  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.286206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.932787  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 06:49:33.951171  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (998.365µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.972436  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.992049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:33.972694  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 06:49:33.992031  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.814489ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.014831  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.014862  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.014899  110531 httplog.go:90] GET /healthz: (4.546828ms) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:34.015162  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.495798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.015380  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 06:49:34.022097  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.022124  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.022156  110531 httplog.go:90] GET /healthz: (778.065µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.031422  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.146947ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.052146  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.960815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.052364  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 06:49:34.071282  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.03046ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.092472  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.26419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.092698  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 06:49:34.111493  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.111519  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.111559  110531 httplog.go:90] GET /healthz: (1.111699ms) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:34.111898  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.382714ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.124723  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.124752  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.124783  110531 httplog.go:90] GET /healthz: (851.01µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.132517  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.155151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.132689  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 06:49:34.152261  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.101815ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.171594  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.367462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.171806  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0814 06:49:34.191822  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.648452ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.211963  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.211992  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.212043  110531 httplog.go:90] GET /healthz: (1.757699ms) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:34.212292  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.117248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.212482  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 06:49:34.222082  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.222110  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.222147  110531 httplog.go:90] GET /healthz: (775.853µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.231919  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.32657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.252276  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.711523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.252491  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 06:49:34.272610  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.919746ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.292175  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.821714ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.292971  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 06:49:34.312316  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.312349  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.312386  110531 httplog.go:90] GET /healthz: (1.669277ms) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:34.312425  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.378858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.323261  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.323294  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.323326  110531 httplog.go:90] GET /healthz: (879.106µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.334863  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.907283ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.335090  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 06:49:34.351231  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.029946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.372232  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.012117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.372541  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 06:49:34.393152  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.494187ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.412800  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.412839  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.412877  110531 httplog.go:90] GET /healthz: (2.612868ms) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:34.413137  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.91301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.413315  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 06:49:34.422518  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.422546  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.422585  110531 httplog.go:90] GET /healthz: (1.225879ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.433547  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (3.311301ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.452718  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.222735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.452923  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 06:49:34.471665  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (924.682µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.492695  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.285585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.492899  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 06:49:34.512817  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.512844  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.512878  110531 httplog.go:90] GET /healthz: (1.040522ms) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:34.513142  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.686158ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.521945  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.521975  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.522006  110531 httplog.go:90] GET /healthz: (658.979µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.532174  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.922422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.532357  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 06:49:34.552764  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (2.608362ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.572520  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.247574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.572838  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 06:49:34.591343  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.112243ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.611698  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.611721  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.611753  110531 httplog.go:90] GET /healthz: (1.361429ms) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:34.612405  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.140746ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.612813  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 06:49:34.622106  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.622132  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.622225  110531 httplog.go:90] GET /healthz: (854.217µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.632495  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (2.276562ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.652934  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.909764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.653335  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 06:49:34.671425  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.249298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.691783  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.510099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.692135  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 06:49:34.711728  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.711756  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.711770  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.285292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.711780  110531 httplog.go:90] GET /healthz: (1.308661ms) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:34.721964  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.722002  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.722377  110531 httplog.go:90] GET /healthz: (1.029939ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.731875  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.656063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.732212  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 06:49:34.751609  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.338867ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.773047  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.566165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.773242  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 06:49:34.791143  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (933.609µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.811564  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.811586  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.811615  110531 httplog.go:90] GET /healthz: (856.099µs) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:34.813519  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.757776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.813811  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 06:49:34.822054  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.822085  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.822113  110531 httplog.go:90] GET /healthz: (769.744µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.832437  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (920.311µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.852203  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.880815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.852410  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 06:49:34.871821  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.58416ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.892188  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.779958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.892421  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 06:49:34.912264  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.912310  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.912328  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (969.648µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:34.912341  110531 httplog.go:90] GET /healthz: (1.322803ms) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:34.922105  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:34.922134  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:34.922167  110531 httplog.go:90] GET /healthz: (833.987µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.932402  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.20279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.932584  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 06:49:34.951308  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (984.344µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.972419  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.166348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:34.972661  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 06:49:34.992637  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.008975ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.013911  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.944356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.014069  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.014097  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.014121  110531 httplog.go:90] GET /healthz: (2.492743ms) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:35.014420  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 06:49:35.022034  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.022064  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.022089  110531 httplog.go:90] GET /healthz: (707.365µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.031197  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (989.593µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.051660  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.47073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.052171  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 06:49:35.071844  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.63612ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.091615  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.42645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.092006  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 06:49:35.111539  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.111569  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.111602  110531 httplog.go:90] GET /healthz: (1.352935ms) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:35.111905  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.685844ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.126921  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.126948  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.126990  110531 httplog.go:90] GET /healthz: (1.653626ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.131943  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.460346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.132363  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 06:49:35.151232  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.034737ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.172467  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.786237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.172680  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 06:49:35.191325  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.131758ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.211359  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.211385  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.211415  110531 httplog.go:90] GET /healthz: (1.12172ms) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:35.212155  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.908187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.212362  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 06:49:35.222345  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.222370  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.222405  110531 httplog.go:90] GET /healthz: (1.056166ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.230948  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (748.387µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.251669  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.49736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.251895  110531 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
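
Every clusterrolebinding above follows the same shape: a GET that 404s, then a POST that 201s, logged by storage_rbac.go as "created clusterrolebinding...". A hedged client-go sketch of that get-then-create bootstrap step follows; it uses recent client-go signatures (older releases take no context argument), and the binding name and kubeconfig path are illustrative.

package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureClusterRoleBinding creates the binding only if the GET reports
// NotFound, matching the 404-then-201 pairs in the log.
func ensureClusterRoleBinding(ctx context.Context, cs kubernetes.Interface, crb *rbacv1.ClusterRoleBinding) error {
	_, err := cs.RbacV1().ClusterRoleBindings().Get(ctx, crb.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present; a full bootstrapper would reconcile contents here
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "system:controller:example"}, // hypothetical name
		RoleRef:    rbacv1.RoleRef{APIGroup: rbacv1.GroupName, Kind: "ClusterRole", Name: "system:controller:example"},
	}
	if err := ensureClusterRoleBinding(context.Background(), cs, crb); err != nil {
		panic(err)
	}
	fmt.Println("ensured", crb.Name)
}
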
I0814 06:49:35.271643  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.35084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.274558  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.21216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.291744  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.55816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.291929  110531 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 06:49:35.311499  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.234372ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.311815  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.311855  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.311901  110531 httplog.go:90] GET /healthz: (1.63417ms) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:35.313462  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.459279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.322105  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.322136  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.322168  110531 httplog.go:90] GET /healthz: (802.093µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.333320  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.133605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.333500  110531 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 06:49:35.351296  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.027219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.353105  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.276063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.373129  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.957106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.373482  110531 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 06:49:35.403005  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (12.663921ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.404808  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.332456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.410996  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.411097  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.411135  110531 httplog.go:90] GET /healthz: (869.508µs) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:35.411624  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.388429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.411834  110531 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 06:49:35.422257  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.422287  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.422323  110531 httplog.go:90] GET /healthz: (885.467µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.431082  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (878.228µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.432555  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.127577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.451794  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.531993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.452007  110531 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 06:49:35.471300  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.135567ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.473138  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.419982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.491753  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.503704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.491963  110531 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 06:49:35.511694  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.511725  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.511765  110531 httplog.go:90] GET /healthz: (1.486166ms) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:35.511837  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.673265ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.513707  110531 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.40632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.522194  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.522231  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.522265  110531 httplog.go:90] GET /healthz: (747.877µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.531802  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.606232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.531981  110531 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 06:49:35.551320  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (949.143µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.553351  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.24677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.571872  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.681517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.572666  110531 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 06:49:35.591754  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.521545ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.594377  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.096284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.613646  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.483328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.613911  110531 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 06:49:35.616626  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.616654  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.616685  110531 httplog.go:90] GET /healthz: (4.465781ms) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:35.622474  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.622497  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.622529  110531 httplog.go:90] GET /healthz: (1.194822ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.631534  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.371566ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.634105  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.222834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.652189  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.916718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.652398  110531 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 06:49:35.671392  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.224505ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.672807  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.037777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.692064  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.816155ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.692406  110531 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 06:49:35.711048  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.711074  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.711124  110531 httplog.go:90] GET /healthz: (836.629µs) 0 [Go-http-client/1.1 127.0.0.1:38206]
I0814 06:49:35.713874  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.059464ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.716289  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.751495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.722693  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.722728  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.722758  110531 httplog.go:90] GET /healthz: (1.39039ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.734209  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.884347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.734469  110531 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 06:49:35.752650  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (2.023505ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.754631  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.500683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.772715  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.524508ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.772910  110531 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 06:49:35.791835  110531 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.517363ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.793919  110531 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.547033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.811219  110531 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:49:35.811248  110531 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:49:35.811280  110531 httplog.go:90] GET /healthz: (938.97µs) 0 [Go-http-client/1.1 127.0.0.1:38066]
I0814 06:49:35.813183  110531 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.858651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.813413  110531 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 06:49:35.822142  110531 httplog.go:90] GET /healthz: (796.204µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.823947  110531 httplog.go:90] GET /api/v1/namespaces/default: (1.507955ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.827253  110531 httplog.go:90] POST /api/v1/namespaces: (1.38438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.828709  110531 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.103758ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.832498  110531 httplog.go:90] POST /api/v1/namespaces/default/services: (3.424686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.837700  110531 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.593105ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.842575  110531 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (4.430401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.914508  110531 httplog.go:90] GET /healthz: (4.117035ms) 200 [Go-http-client/1.1 127.0.0.1:38206]
W0814 06:49:35.915310  110531 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:49:35.915338  110531 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:49:35.915357  110531 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:49:35.915368  110531 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:49:35.915378  110531 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:49:35.915388  110531 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:49:35.915403  110531 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:49:35.915413  110531 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:49:35.915430  110531 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:49:35.915497  110531 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:49:35.915509  110531 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 06:49:35.915536  110531 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0814 06:49:35.915546  110531 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
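
The factory line above enumerates fit predicates and priority functions by name. As a toy illustration of that two-phase structure (filter every node through the predicate map, then score survivors with the priority map and bind to the best), not kube-scheduler's actual factory code:

package main

import "fmt"

type node struct {
	name    string
	cpuFree int
}

type pod struct{ cpuReq int }

type predicate func(pod, node) bool
type priority func(pod, node) int

// schedule filters with every named predicate, then sums every named
// priority to pick the highest-scoring feasible node.
func schedule(p pod, nodes []node, preds map[string]predicate, prios map[string]priority) (string, bool) {
	best, bestScore, found := "", -1, false
	for _, n := range nodes {
		fits := true
		for _, fit := range preds {
			if !fit(p, n) {
				fits = false
				break
			}
		}
		if !fits {
			continue
		}
		score := 0
		for _, f := range prios {
			score += f(p, n)
		}
		if score > bestScore {
			best, bestScore, found = n.name, score, true
		}
	}
	return best, found
}

func main() {
	preds := map[string]predicate{
		"GeneralPredicates": func(p pod, n node) bool { return n.cpuFree >= p.cpuReq },
	}
	prios := map[string]priority{
		"LeastRequestedPriority": func(p pod, n node) int { return n.cpuFree - p.cpuReq },
	}
	nodes := []node{{"test-node-0", 500}, {"test-node-1", 100}}
	if name, ok := schedule(pod{cpuReq: 200}, nodes, preds, prios); ok {
		fmt.Println("bound to", name)
	}
}
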
I0814 06:49:35.916109  110531 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.916134  110531 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.916509  110531 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.916523  110531 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.916801  110531 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.916814  110531 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.917116  110531 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (712.05µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.917313  110531 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.917329  110531 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.917617  110531 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (595.223µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:49:35.917688  110531 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.917703  110531 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.918161  110531 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (350.441µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38456]
I0814 06:49:35.918458  110531 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (386.927µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38206]
I0814 06:49:35.918865  110531 get.go:250] Starting watch for /api/v1/services, rv=29340 labels= fields= timeout=8m48s
I0814 06:49:35.919112  110531 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=28867 labels= fields= timeout=5m58s
I0814 06:49:35.919200  110531 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (458.76µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38460]
I0814 06:49:35.919302  110531 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=28864 labels= fields= timeout=7m7s
I0814 06:49:35.919762  110531 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.919784  110531 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.919884  110531 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.919898  110531 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.919920  110531 get.go:250] Starting watch for /api/v1/nodes, rv=28864 labels= fields= timeout=6m2s
I0814 06:49:35.920316  110531 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.920329  110531 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.920453  110531 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (404.095µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38456]
I0814 06:49:35.920647  110531 reflector.go:122] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.920659  110531 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.920749  110531 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (371.843µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38466]
I0814 06:49:35.920982  110531 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.920993  110531 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.921247  110531 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=28864 labels= fields= timeout=5m13s
I0814 06:49:35.921386  110531 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (444.358µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38456]
I0814 06:49:35.921900  110531 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (390.757µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38472]
I0814 06:49:35.921984  110531 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=28864 labels= fields= timeout=9m7s
I0814 06:49:35.922081  110531 get.go:250] Starting watch for /api/v1/pods, rv=28864 labels= fields= timeout=7m58s
I0814 06:49:35.922386  110531 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (386.395µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38470]
I0814 06:49:35.922502  110531 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.922521  110531 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0814 06:49:35.922571  110531 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=28865 labels= fields= timeout=5m39s
I0814 06:49:35.922965  110531 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=28864 labels= fields= timeout=5m21s
I0814 06:49:35.923415  110531 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (689.721µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38474]
I0814 06:49:35.923989  110531 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=28866 labels= fields= timeout=6m27s
I0814 06:49:35.924405  110531 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=28867 labels= fields= timeout=8m12s
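
Each "Starting reflector ... (1s)" / "Listing and watching ..." pair above is an informer's reflector doing a LIST (the ?limit=500&resourceVersion=0 GETs) followed by a WATCH from the returned resourceVersion; the scheduler then blocks until every cache reports synced, which is what the "caches populated" lines below record. A hedged sketch of that setup with a recent client-go (the kubeconfig path is an assumption); the 1s default resync here is also why "forcing resync" fires for every reflector roughly once per second later in this log:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// 1s default resync, matching "Starting reflector *v1.Service (1s)" above.
	factory := informers.NewSharedInformerFactory(cs, time.Second)
	pods := factory.Core().V1().Pods().Informer()
	nodes := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // each informer starts a reflector: LIST, then WATCH

	if !cache.WaitForCacheSync(stop, pods.HasSynced, nodes.HasSynced) {
		panic("caches never populated")
	}
	fmt.Println("caches populated")
}
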
I0814 06:49:36.015981  110531 shared_informer.go:211] caches populated
I0814 06:49:36.116268  110531 shared_informer.go:211] caches populated
I0814 06:49:36.216470  110531 shared_informer.go:211] caches populated
I0814 06:49:36.316695  110531 shared_informer.go:211] caches populated
I0814 06:49:36.416929  110531 shared_informer.go:211] caches populated
I0814 06:49:36.517238  110531 shared_informer.go:211] caches populated
I0814 06:49:36.617363  110531 shared_informer.go:211] caches populated
I0814 06:49:36.717582  110531 shared_informer.go:211] caches populated
I0814 06:49:36.817772  110531 shared_informer.go:211] caches populated
I0814 06:49:36.917759  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:36.917962  110531 shared_informer.go:211] caches populated
I0814 06:49:36.919730  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:36.921104  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:36.921780  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:36.922440  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:36.922768  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:36.923921  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:37.018134  110531 shared_informer.go:211] caches populated
I0814 06:49:37.118334  110531 shared_informer.go:211] caches populated
I0814 06:49:37.120980  110531 httplog.go:90] POST /api/v1/nodes: (1.921855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:49:37.121258  110531 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0814 06:49:37.123745  110531 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods: (2.165928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:49:37.123831  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/waiting-pod
I0814 06:49:37.123848  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/waiting-pod
I0814 06:49:37.123967  110531 scheduler_binder.go:256] AssumePodVolumes for pod "preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/waiting-pod", node "test-node-0"
I0814 06:49:37.123978  110531 scheduler_binder.go:266] AssumePodVolumes for pod "preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/waiting-pod", node "test-node-0": all PVCs bound and nothing to do
I0814 06:49:37.124040  110531 framework.go:562] waiting for 30s for pod "waiting-pod" at permit
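
"waiting for 30s for pod ... at permit" is the framework's permit stage: the pod has been scheduled to a node, but binding is held until a permit plugin allows it or the wait times out. The following is a toy model of that gate built on channels, mimicking the behavior rather than the scheduling framework's actual plugin interfaces (which have changed across releases):

package main

import (
	"fmt"
	"time"
)

type waitingPod struct {
	name  string
	allow chan bool
}

// permit blocks the bind for up to timeout; in the test, the plugin driven
// by the signalling pod is what eventually calls Allow or Reject.
func permit(wp *waitingPod, timeout time.Duration) error {
	select {
	case ok := <-wp.allow:
		if !ok {
			return fmt.Errorf("pod %q rejected at permit", wp.name)
		}
		return nil
	case <-time.After(timeout):
		return fmt.Errorf("pod %q timed out at permit", wp.name)
	}
}

func main() {
	wp := &waitingPod{name: "waiting-pod", allow: make(chan bool)}
	go func() {
		time.Sleep(100 * time.Millisecond)
		wp.allow <- true // the signalling side releases the waiting pod
	}()
	if err := permit(wp, 30*time.Second); err != nil {
		fmt.Println("permit failed:", err)
		return
	}
	fmt.Println("permit granted; proceeding to bind", wp.name)
}
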
I0814 06:49:37.128310  110531 factory.go:615] Attempting to bind signalling-pod to test-node-0
I0814 06:49:37.128425  110531 factory.go:615] Attempting to bind waiting-pod to test-node-0
I0814 06:49:37.128748  110531 scheduler.go:447] Failed to bind pod: permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod
E0814 06:49:37.128803  110531 scheduler.go:449] scheduler cache ForgetPod failed: pod acf7fca1-6853-40a9-9410-b20af2397c7e wasn't assumed so cannot be forgotten
E0814 06:49:37.128822  110531 scheduler.go:605] error binding pod: Post http://127.0.0.1:38025/api/v1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/pods/signalling-pod/binding: dial tcp 127.0.0.1:38025: connect: connection refused
E0814 06:49:37.128846  110531 factory.go:566] Error scheduling permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod: Post http://127.0.0.1:38025/api/v1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/pods/signalling-pod/binding: dial tcp 127.0.0.1:38025: connect: connection refused; retrying
I0814 06:49:37.128873  110531 factory.go:624] Updating pod condition for permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0814 06:49:37.129256  110531 scheduler.go:280] Error updating the condition of the pod permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod: Put http://127.0.0.1:38025/api/v1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/pods/signalling-pod/status: dial tcp 127.0.0.1:38025: connect: connection refused
E0814 06:49:37.129400  110531 factory.go:599] Error getting pod permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod for retry: Get http://127.0.0.1:38025/api/v1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/pods/signalling-pod: dial tcp 127.0.0.1:38025: connect: connection refused; retrying...
E0814 06:49:37.129650  110531 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:38025/apis/events.k8s.io/v1beta1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/events: dial tcp 127.0.0.1:38025: connect: connection refused' (may retry after sleeping)
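
The burst of errors above is a leftover from an earlier permit-plugin test: its apiserver at 127.0.0.1:38025 is already gone, so the bind POST for signalling-pod gets connection refused, the cache refuses to ForgetPod it never assumed, and the factory keeps retrying; the retry timestamps below roughly double (~0.2s, 0.4s, 0.8s, 1.6s, 3.2s, 6.4s), which suggests exponential backoff. A generic sketch of that retry pattern; the helper is illustrative, not factory.go:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op with a delay that doubles after each failure,
// producing the ~0.2s/0.4s/0.8s/... spacing seen in the log.
func retryWithBackoff(initial time.Duration, maxAttempts int, op func() error) error {
	delay := initial
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(200*time.Millisecond, 5, func() error {
		return errors.New("connect: connection refused") // stand-in for the dead apiserver
	})
	fmt.Println("gave up:", err)
}
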
I0814 06:49:37.130711  110531 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/waiting-pod/binding: (2.020951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:49:37.130924  110531 scheduler.go:614] pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/waiting-pod is bound successfully on node "test-node-0", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
I0814 06:49:37.132655  110531 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/events: (1.435537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
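
The successful POST to .../pods/waiting-pod/binding above is the pod's bind subresource, which is how the scheduler records its placement decision. A hedged client-go sketch of issuing that bind (recent client-go; the namespace is abbreviated, since the test's generated name is too long to repeat):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns := "default" // the test uses a generated namespace; abbreviated here
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: "waiting-pod", Namespace: ns},
		Target:     v1.ObjectReference{Kind: "Node", Name: "test-node-0"},
	}
	// Posts to /api/v1/namespaces/{ns}/pods/waiting-pod/binding, as logged above.
	if err := cs.CoreV1().Pods(ns).Bind(context.Background(), binding, metav1.CreateOptions{}); err != nil {
		fmt.Println("bind failed:", err)
		return
	}
	fmt.Println("pod bound to", binding.Target.Name)
}
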
E0814 06:49:37.330121  110531 factory.go:599] Error getting pod permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod for retry: Get http://127.0.0.1:38025/api/v1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/pods/signalling-pod: dial tcp 127.0.0.1:38025: connect: connection refused; retrying...
E0814 06:49:37.730628  110531 factory.go:599] Error getting pod permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod for retry: Get http://127.0.0.1:38025/api/v1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/pods/signalling-pod: dial tcp 127.0.0.1:38025: connect: connection refused; retrying...
I0814 06:49:37.917918  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:37.919909  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:37.921264  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:37.921891  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:37.922603  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:37.922893  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:37.924149  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:49:38.531310  110531 factory.go:599] Error getting pod permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod for retry: Get http://127.0.0.1:38025/api/v1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/pods/signalling-pod: dial tcp 127.0.0.1:38025: connect: connection refused; retrying...
I0814 06:49:38.918108  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:38.920146  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:38.921454  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:38.922118  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:38.922730  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:38.923081  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:38.924313  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:39.918303  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:39.920333  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:39.921671  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:39.922284  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:39.922870  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:39.923398  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:39.924470  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:49:40.131858  110531 factory.go:599] Error getting pod permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod for retry: Get http://127.0.0.1:38025/api/v1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/pods/signalling-pod: dial tcp 127.0.0.1:38025: connect: connection refused; retrying...
I0814 06:49:40.918554  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:40.920586  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:40.921802  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:40.922446  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:40.923188  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:40.923516  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:40.924613  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:41.918742  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:41.920779  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:41.921973  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:41.922605  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:41.923310  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:41.924274  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:41.924791  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:42.918921  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:42.920930  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:42.922193  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:42.922765  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:42.923408  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:42.924338  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:42.925156  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:49:43.332432  110531 factory.go:599] Error getting pod permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod for retry: Get http://127.0.0.1:38025/api/v1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/pods/signalling-pod: dial tcp 127.0.0.1:38025: connect: connection refused; retrying...
I0814 06:49:43.919097  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:43.921044  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:43.922251  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:43.922924  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:43.923562  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:43.924863  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:43.925420  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:44.919277  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:44.921206  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:44.922403  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:44.923065  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:44.923702  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:44.925097  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:44.925531  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:45.824100  110531 httplog.go:90] GET /api/v1/namespaces/default: (1.38224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:49:45.825641  110531 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.136759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:49:45.827316  110531 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.162705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:49:45.919445  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:45.921380  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:45.922546  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:45.923320  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:45.923842  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:45.925249  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:45.925683  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:46.919610  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:46.921533  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:46.922739  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:46.923496  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:46.924077  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:46.925430  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:46.925830  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:47.919849  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:47.921788  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:47.922873  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:47.923653  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:47.924479  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:47.925548  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:47.925916  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:48.920095  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:48.921953  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:48.923080  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:48.923870  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:48.924600  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:48.925762  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:49:48.926122  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:49:49.286380  110531 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:38025/apis/events.k8s.io/v1beta1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/events: dial tcp 127.0.0.1:38025: connect: connection refused' (may retry after sleeping)
E0814 06:49:49.732987  110531 factory.go:599] Error getting pod permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod for retry: Get http://127.0.0.1:38025/api/v1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/pods/signalling-pod: dial tcp 127.0.0.1:38025: connect: connection refused; retrying...
[... 42 identical informer resync lines omitted (reflector.go:243 "forcing resync", 06:49:49.920–06:49:54.926) ...]
I0814 06:49:55.824618  110531 httplog.go:90] GET /api/v1/namespaces/default: (1.430426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:49:55.826939  110531 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.907078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:49:55.828620  110531 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.162225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
[... 28 identical informer resync lines omitted (reflector.go:243 "forcing resync", 06:49:55.921–06:49:58.927) ...]
E0814 06:49:59.373950  110531 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:38025/apis/events.k8s.io/v1beta1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/events: dial tcp 127.0.0.1:38025: connect: connection refused' (may retry after sleeping)
[... 21 identical informer resync lines omitted (reflector.go:243 "forcing resync", 06:49:59.922–06:50:01.928) ...]
E0814 06:50:02.533910  110531 factory.go:599] Error getting pod permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod for retry: Get http://127.0.0.1:38025/api/v1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/pods/signalling-pod: dial tcp 127.0.0.1:38025: connect: connection refused; retrying...
[... 21 identical informer resync lines omitted (reflector.go:243 "forcing resync", 06:50:02.922–06:50:04.928) ...]
I0814 06:50:05.824871  110531 httplog.go:90] GET /api/v1/namespaces/default: (1.541487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:05.826457  110531 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.164392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:05.827865  110531 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.01375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
[... 14 identical informer resync lines omitted (reflector.go:243 "forcing resync", 06:50:05.923–06:50:06.928) ...]
I0814 06:50:07.128359  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:07.128412  110531 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods: (3.191683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:07.128429  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:07.128651  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:07.128726  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:07.131058  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.583457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:07.131543  110531 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod/status: (1.82669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0814 06:50:07.132814  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (903.452µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0814 06:50:07.133147  110531 generic_scheduler.go:1191] Node test-node-0 is a potential node for preemption.
I0814 06:50:07.133436  110531 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/events: (2.798933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:07.135029  110531 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod/status: (1.394493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0814 06:50:07.138048  110531 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/waiting-pod: (2.429594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:07.140031  110531 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/events: (1.394555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
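The sequence above is the heart of this run: preemptor-pod is POSTed, found unschedulable on the single node (insufficient cpu and memory), generic_scheduler.go:1191 marks test-node-0 as the preemption candidate, and the lower-priority waiting-pod is DELETEd to make room. As a hedged illustration of the kind of pod that drives this — names, image, and quantities here are hypothetical, not the actual fixtures in test/integration/scheduler — a Go sketch:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Requests larger than what remains free on the node produce the
	// "Insufficient cpu, Insufficient memory" predicate failures above;
	// combined with a higher priority than the running waiting-pod, the
	// scheduler considers preemption. All values here are illustrative.
	preemptor := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
		Spec: v1.PodSpec{
			PriorityClassName: "high-priority", // hypothetical PriorityClass
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						v1.ResourceCPU:    resource.MustParse("600m"),
						v1.ResourceMemory: resource.MustParse("600Mi"),
					},
				},
			}},
		},
	}
	fmt.Println(preemptor.Name)
}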
I0814 06:50:07.230874  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.63445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:07.331217  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.871644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:07.430832  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.535485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:07.531000  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.754178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:07.630535  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.251444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:07.731495  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.91081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:07.830447  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.318078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
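The ~100ms cadence of the GET .../pods/preemptor-pod lines above (and through the rest of the log) is the shape of a client-go polling wait. A minimal sketch under 2019-era client-go (pre-1.18, so Get takes no context argument); the helper name and the bound-to-node condition are assumptions, not the test's own wait helper:

package sketch

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForScheduled polls the pod every 100ms until it is bound to a node,
// which would produce exactly this kind of httplog GET train.
func waitForScheduled(cs kubernetes.Interface, ns, name string) error {
	return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Spec.NodeName != "", nil
	})
}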
I0814 06:50:07.924517  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:07.925661  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:07.925994  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:07.927209  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:07.927352  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:07.927368  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:07.927542  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:07.927585  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:07.928384  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:07.929341  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:07.929370  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:07.929405  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.439129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:07.929405  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.55258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:07.929974  110531 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/events: (1.513049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42052]
I0814 06:50:07.931412  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.825993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42054]
I0814 06:50:08.031820  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.53862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:08.135743  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.954772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:08.231037  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.818561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:08.330873  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.637847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:08.431165  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.605014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:08.531127  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.97095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:08.630799  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.532942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:08.730857  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.641541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:08.830722  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.505944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:08.920002  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:08.920118  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:08.920282  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:08.920322  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:08.923686  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.952837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:08.924472  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.843572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:08.924700  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:08.926134  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:08.926309  110531 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/events/preemptor-pod.15bab757bd1e1c1c: (3.804741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42250]
I0814 06:50:08.926452  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:08.927402  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:08.927502  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:08.927520  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:08.927656  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:08.927694  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:08.928520  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:08.929185  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.174036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:08.929490  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:08.929526  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:08.930108  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.148601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:08.930327  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (985.495µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42252]
I0814 06:50:09.030755  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.531143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:09.130524  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.296394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:09.230810  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.466293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:09.332042  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.763393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:09.430874  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.636476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:09.531009  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.765593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:09.630638  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.381881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:09.731357  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.121128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
E0814 06:50:09.763804  110531 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:38025/apis/events.k8s.io/v1beta1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/events: dial tcp 127.0.0.1:38025: connect: connection refused' (may retry after sleeping)
I0814 06:50:09.830477  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.276671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:09.924852  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:09.926276  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:09.927233  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:09.927551  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:09.927674  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:09.927687  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:09.927816  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:09.927859  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:09.929396  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:09.929707  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:09.929731  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:09.931719  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.80564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:09.932347  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (3.868947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38754]
I0814 06:50:09.932642  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (3.592174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:10.030949  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.371902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:10.130718  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.461732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:10.230744  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.555275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:10.330786  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.532866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:10.430809  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.519234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:10.530672  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.446866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:10.630749  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.453495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:10.730820  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.613138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:10.834330  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.570045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:10.925191  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:10.926455  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:10.927960  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:10.928203  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:10.928224  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:10.928367  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:10.928423  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:10.929190  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:10.929598  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:10.929803  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:10.929823  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:10.930444  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.742632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:10.931255  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.433534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:10.931338  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.67574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43006]
I0814 06:50:11.030862  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.608868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:11.131377  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.850097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:11.230618  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.331288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:11.331194  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.010053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:11.431463  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.29054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:11.532005  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.711821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:11.631342  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.954667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:11.731744  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.534196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:11.830506  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.23698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:11.925359  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:11.926602  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:11.928152  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:11.928312  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:11.928326  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:11.928495  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:11.928541  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:11.929328  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:11.929934  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:11.929966  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:11.929982  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:11.930674  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.903382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:11.930675  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.780446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:11.931835  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.904491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43214]
I0814 06:50:12.030863  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.622054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:12.130807  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.529555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:12.231602  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.060401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:12.337090  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (7.742615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:12.433195  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (3.976379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:12.530718  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.510952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:12.630967  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.641432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:12.731181  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.822508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:12.830639  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.404939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:12.925542  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:12.927877  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:12.928334  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:12.928549  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:12.928581  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:12.928725  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:12.928767  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:12.929562  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:12.930236  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:12.930268  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:12.930272  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:12.931543  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.133842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:12.931552  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.680054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43438]
I0814 06:50:12.931834  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.527833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:13.030708  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.452313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:13.132452  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.941959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:13.230634  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.356707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:13.331111  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.777485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:13.430630  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.402162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:13.530451  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.249647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:13.630750  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.498046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:13.730683  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.41228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:13.830560  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.362372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:13.925674  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:13.928106  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:13.928529  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:13.928676  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:13.928693  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:13.928863  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:13.928913  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:13.929804  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:13.930422  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:13.930598  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.451416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:13.930623  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:13.930598  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.475457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:13.930830  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (951.958µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0814 06:50:13.930844  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:14.030978  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.663019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:14.131830  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.554289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:14.231710  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.04587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:14.330712  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.426041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:14.430966  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.702674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:14.530753  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.501484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:14.631073  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.693509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:14.731240  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.868722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:14.830954  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.638729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:14.925877  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:14.928288  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:14.928716  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:14.928889  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:14.928905  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:14.929047  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:14.929086  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:14.929945  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:14.930588  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:14.930934  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.657158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:14.930961  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:14.930935  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.711422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:14.931143  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:14.931681  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.860794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0814 06:50:15.030693  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.490814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:15.131128  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.827474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:15.230930  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.741748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:15.331974  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.617997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:15.430693  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.459333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:15.530816  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.579298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:15.630709  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.43299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:15.730795  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.495426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:15.825785  110531 httplog.go:90] GET /api/v1/namespaces/default: (1.297547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:15.827365  110531 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.054437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:15.828941  110531 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.13026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:15.830354  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.229687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:15.926135  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:15.928386  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:15.928910  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
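[editor's sketch] The clusters of "forcing resync" lines above recur once per second, one per shared informer the test wires up. A minimal sketch, assuming a clientset cs and a deliberately short one-second resync period (both assumptions matching the cadence in this log, not taken from the repo), of informer setup that produces this pattern:

package sketch

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// startInformers: a shared informer factory with a one-second default
// resync makes each reflector log
// "k8s.io/client-go/informers/factory.go:...: forcing resync" every period.
func startInformers(cs kubernetes.Interface, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(cs, time.Second)
	factory.Core().V1().Pods().Informer()  // register the informers in use;
	factory.Core().V1().Nodes().Informer() // this set is assumed for illustration
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
}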
I0814 06:50:15.929092  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:15.929108  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:15.929271  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:15.929329  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
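[editor's sketch] The steady ~100ms bursts of GET /pods/preemptor-pod, interleaved with one scheduling attempt per second that ends in "Unable to schedule ...; waiting", are consistent with the test polling the preemptor pod's status while the scheduler keeps reporting it Unschedulable. A minimal sketch of such a poll loop with client-go (cs, the 100ms interval, and the one-minute timeout are assumptions for illustration, not taken from this log; the Get signature is the context-free client-go form of this era):

package sketch

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForScheduled polls until the pod's PodScheduled condition is True.
// Each iteration issues one GET, which httplog records as the lines above.
func waitForScheduled(cs kubernetes.Interface, ns, name string) error {
	return wait.Poll(100*time.Millisecond, time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == v1.PodScheduled && c.Status == v1.ConditionTrue {
				return true, nil // scheduled; stop polling
			}
		}
		return false, nil // still pending, e.g. Reason=Unschedulable
	})
}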
I0814 06:50:15.930165  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:15.930740  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:15.931001  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.44404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:15.931117  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:15.931390  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:15.931648  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.803938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:15.932295  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (3.010698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0814 06:50:16.030833  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.616256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:16.130527  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.313763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:16.230639  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.430007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:16.330722  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.465752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:16.430636  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.439679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:16.530951  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.770607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:16.630582  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.356971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:16.730799  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.500009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:16.830565  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.325704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:16.926320  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:16.928578  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:16.929078  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:16.929192  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:16.929205  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:16.929388  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:16.929437  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:16.930619  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:16.930873  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:16.931290  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:16.931497  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:16.931845  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.502745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:16.931911  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.617246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:16.932171  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.887768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42592]
I0814 06:50:17.030449  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.283992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:17.130600  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.331722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:17.230693  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.454306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:17.330771  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.580241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:17.431428  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.683568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:17.530514  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.401316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:17.630660  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.444933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:17.730737  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.534019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:17.830659  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.47557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:17.926512  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:17.928697  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:17.929592  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:17.929730  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:17.929746  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:17.929888  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:17.929922  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:17.930931  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:17.931039  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:17.931887  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.677405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:17.931889  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.75453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:17.932144  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (996.518µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44038]
I0814 06:50:17.932472  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:17.932485  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:18.030941  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.707586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:18.132650  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.871573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:18.231689  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.505226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:18.330809  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.603132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:18.431460  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.184997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:18.531287  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.03055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:18.630839  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.588936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:18.730628  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.423996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:18.830543  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.32755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:18.926703  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:18.928835  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:18.929760  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:18.929887  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:18.929903  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:18.930008  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:18.930162  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:18.931160  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:18.931196  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:18.931539  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.429961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:18.932615  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:18.932721  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.679301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44050]
I0814 06:50:18.932767  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.107404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:18.932881  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:19.030683  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.400033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:19.133683  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.895821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:19.230666  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.393402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:19.330968  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.534489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:19.431100  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.89503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:19.530706  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.466449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:19.630842  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.638651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:19.730827  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.486022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:19.831957  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.673363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:19.926870  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:19.929111  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:19.929913  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:19.930128  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:19.930151  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:19.930313  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:19.930369  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:19.931297  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:19.931356  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:19.931724  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.532114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:19.931728  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.017137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:19.932384  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.450872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44060]
I0814 06:50:19.932704  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:19.932985  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:20.030571  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.287239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:20.135268  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.856435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:20.231005  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.7911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:20.330811  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.573247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:20.430803  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.559878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:20.530779  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.559024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:20.630543  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.320149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:20.730836  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.605701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:20.831258  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.86749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:20.926961  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:20.929279  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:20.930092  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:20.930266  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:20.930295  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:20.930475  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:20.930532  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:20.931004  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.706959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:20.931409  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:20.931503  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:20.932313  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.574602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:20.932636  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.62158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44080]
I0814 06:50:20.932813  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:20.933160  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:21.030767  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.354764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:21.130398  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.225436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:21.230738  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.521746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:21.330988  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.627394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:21.430856  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.586158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:21.530864  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.583376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:21.630768  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.480925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
E0814 06:50:21.709560  110531 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:38025/apis/events.k8s.io/v1beta1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/events: dial tcp 127.0.0.1:38025: connect: connection refused' (may retry after sleeping)
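[editor's note] This error targets port 38025 and the permit-plugin2b2a95dd namespace, i.e. most likely the apiserver of an earlier test case in this run that has already been torn down; the event broadcaster it left behind keeps retrying against the dead endpoint. Illustratively (this is not the client-go source; postEvent and the 10s sleep are assumed), the "may retry after sleeping" behaviour amounts to a loop like:

package sketch

import (
	"time"

	"k8s.io/klog"
)

// writeEventWithRetry: a hedged illustration of retry-after-sleep on a
// failed event POST; postEvent is an assumed helper, not a real API.
func writeEventWithRetry(postEvent func() error) {
	for {
		if err := postEvent(); err != nil {
			klog.Errorf("Unable to write event: %v (may retry after sleeping)", err)
			time.Sleep(10 * time.Second) // assumed backoff interval
			continue
		}
		return
	}
}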
I0814 06:50:21.730941  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.79714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:21.830521  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.3728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:21.927232  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:21.929418  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:21.930260  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:21.930399  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:21.930424  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:21.930575  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:21.930639  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:21.930768  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.563192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:21.931538  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:21.931800  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:21.932328  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.132642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:21.932516  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.220754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0814 06:50:21.932942  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:21.933292  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:22.030980  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.793024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:22.130612  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.366414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:22.230643  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.363611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:22.331211  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.823455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:22.431209  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.029496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:22.530970  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.665913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:22.630638  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.305721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:22.730691  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.553637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:22.830674  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.406377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:22.927392  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:22.929722  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:22.930441  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:22.930607  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:22.930627  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:22.930828  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:22.930893  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:22.931687  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:22.931942  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:22.932477  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.076733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:22.932575  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.235404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:22.932973  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (3.765486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0814 06:50:22.933325  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:22.933459  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:23.030615  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.428295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:23.130611  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.367848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:23.230822  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.549558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:23.330694  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.379768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:23.430702  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.437399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:23.530379  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.263038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:23.630648  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.439891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:23.730731  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.473568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:23.831183  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.91314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:23.927585  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:23.929843  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:23.930607  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:23.930640  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.393786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:23.930775  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:23.930796  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:23.930911  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:23.930958  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:23.932115  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:23.932294  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:23.932483  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.258331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:23.932483  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.235029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:23.933448  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:23.933570  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:24.035980  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (6.691862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:24.130738  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.40097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:24.230487  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.209083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:24.330590  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.301091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:24.430822  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.571028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:24.530658  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.505396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:24.630661  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.400873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:24.731082  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.769849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:24.830491  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.274962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:24.927691  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:24.930220  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:24.930800  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.5671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:24.931119  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:24.931250  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:24.931275  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:24.931378  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:24.931424  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:24.932274  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:24.932411  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:24.932743  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.11999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:24.932863  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.160685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:24.933640  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:24.933732  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:25.030594  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.326731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:25.131144  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.934747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:25.230817  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.626658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:25.331387  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.110815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:25.431082  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.867638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:25.530746  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.535654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:25.630885  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.65238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:25.730801  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.518576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:25.825883  110531 httplog.go:90] GET /api/v1/namespaces/default: (1.326893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:25.827479  110531 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.136227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:25.828802  110531 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (991.236µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:25.830219  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.142126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:25.927939  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:25.930361  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:25.930972  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.72236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:25.931326  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:25.931467  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:25.931484  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:25.931608  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:25.931648  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:25.932427  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:25.932823  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:25.933230  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.410069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:25.933249  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.271315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:25.933757  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:25.933857  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:26.031140  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.845931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:26.130683  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.389642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:26.230964  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.741027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:26.330711  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.491376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:26.430616  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.337926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:26.530791  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.563111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:26.630499  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.313112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:26.730790  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.587445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:26.830717  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.456311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:26.928164  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:26.930718  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:26.931098  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.675846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:26.931482  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:26.931600  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:26.931621  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:26.931740  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:26.931790  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:26.932574  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:26.933099  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:26.933595  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.22168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:26.933831  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.453173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:26.933886  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:26.933989  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:27.030929  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.571253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:27.130674  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.533191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:27.230679  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.420793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:27.330864  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.516041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:27.430441  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.344122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:27.530655  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.5166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:27.630795  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.574924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:27.730824  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.619983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:27.830888  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.675148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:27.928355  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:27.930862  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:27.931050  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.901187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:27.931652  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:27.931770  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:27.931789  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:27.931898  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:27.931932  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:27.932812  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:27.933236  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:27.933743  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.532906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:27.933902  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.542002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:27.934106  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:27.934107  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:28.031231  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.008076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:28.130757  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.465706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
E0814 06:50:28.134405  110531 factory.go:599] Error getting pod permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/signalling-pod for retry: Get http://127.0.0.1:38025/api/v1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/pods/signalling-pod: dial tcp 127.0.0.1:38025: connect: connection refused; retrying...
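[editor's sketch] Same symptom as the event-broadcaster error above: the scheduler's error handler re-fetches a failed pod (here signalling-pod from the earlier permit-plugin test) before requeueing it, but the fetch goes to the already-stopped apiserver on 127.0.0.1:38025, so it fails with connection refused and is retried. A hedged sketch of such a fetch-and-retry, roughly what factory.go logs; getPod and requeue are assumed helpers, and the backoff parameters are illustrative:

package sketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/klog"
)

// retryGetPod retries the pod fetch with exponential backoff, treating
// errors such as "connection refused" as transient.
func retryGetPod(ns, name string, getPod func(ns, name string) (interface{}, error), requeue func(interface{})) error {
	backoff := wait.Backoff{Duration: time.Second, Factor: 2, Steps: 5}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		pod, err := getPod(ns, name)
		if err != nil {
			klog.Errorf("Error getting pod %s/%s for retry: %v; retrying...", ns, name, err)
			return false, nil // transient: try again after the backoff
		}
		requeue(pod)
		return true, nil
	})
}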
I0814 06:50:28.230638  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.385116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:28.330943  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.666124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:28.430988  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.655125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:28.530946  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.607895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:28.631068  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.819911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:28.731144  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.800578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:28.831245  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.889797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:28.928536  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:28.930975  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:28.931304  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.957005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:28.931804  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:28.931986  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:28.931997  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:28.932104  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:28.932138  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:28.932963  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:28.933409  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:28.933580  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.270817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:28.933702  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.24936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:28.934240  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:28.934265  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:29.030711  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.501198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:29.130819  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.47931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:29.230690  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.388435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:29.331279  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.785669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:29.430785  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.576617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:29.530705  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.478655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:29.632197  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.927131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:29.730784  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.577942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:29.830693  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.401472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:29.928721  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:29.930657  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.473136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:29.931170  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:29.931968  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:29.932184  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:29.932205  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:29.932348  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:29.932394  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:29.933148  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:29.933571  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:29.933701  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.138383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:29.934355  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:29.934381  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:29.934570  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.302613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:30.030981  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.706235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:30.130562  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.371391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:30.230705  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.368104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:30.330658  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.414282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:30.430726  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.529761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:30.530587  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.373003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:30.630818  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.531776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:30.732079  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.274621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:30.830497  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.278623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:30.928889  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:30.930978  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.621359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:30.931332  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:30.932132  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:30.932285  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:30.932314  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:30.932435  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:30.932480  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:30.933296  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:30.933794  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.115983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:30.934233  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:30.934397  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.595074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:30.934588  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:30.934610  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:31.030778  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.557141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:31.130728  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.44299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:31.230932  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.701789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:31.330646  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.413502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:31.430881  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.568294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:31.531082  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.767791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:31.630635  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.396698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:31.730873  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.605385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:31.830745  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.568313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:31.928988  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:31.931167  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.706051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:31.931483  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:31.932325  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:31.932638  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:31.932802  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:31.933077  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:31.933255  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:31.933441  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:31.934465  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:31.934578  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.015056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:31.934672  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:31.934696  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:31.935140  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.326185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.030691  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.334414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.130724  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.502703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.230689  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.509145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.331290  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.992617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.430775  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.570191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.530066  110531 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.381004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.530578  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.299842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:32.531522  110531 httplog.go:90] GET /api/v1/namespaces/kube-public: (975.688µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.532668  110531 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (765.912µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
E0814 06:50:32.599694  110531 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:38025/apis/events.k8s.io/v1beta1/namespaces/permit-plugin2b2a95dd-498c-47df-af60-8af269fbe8eb/events: dial tcp 127.0.0.1:38025: connect: connection refused' (may retry after sleeping)
I0814 06:50:32.631452  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.077705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.730851  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.596576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.830864  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.714224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.929197  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:32.931073  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.661414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.931613  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:32.932668  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:32.932828  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:32.932847  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:32.933009  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:32.933119  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:32.934136  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:32.934623  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:32.934822  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:32.934824  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:32.934951  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.395428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:32.936185  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.112789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:33.030629  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.424388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:33.130555  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.301356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:33.230551  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.301236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:33.330746  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.511013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:33.430617  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.343593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:33.530590  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.289547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:33.630853  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.619805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:33.731116  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.699531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:33.830969  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.806112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:33.929319  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:33.930729  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.496861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:33.931780  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:33.932845  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:33.933066  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:33.933090  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:33.933176  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:33.933217  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:33.934220  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:33.934841  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:33.935113  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (982.625µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:33.935126  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:33.935146  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:33.935653  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.916173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.030982  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.782257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.130359  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.198283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.230604  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.4544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.330629  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.390731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.430659  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.442591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.530677  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.497945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.630561  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.394354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.731305  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.879766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.830933  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.600244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.923713  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:34.923744  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:34.923966  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:34.924055  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:34.925604  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.311104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:34.925868  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.558362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.929423  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:34.930278  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.149624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.931888  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:34.932940  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:34.933166  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:34.933185  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:34.933359  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:34.933418  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:34.934376  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:34.934971  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:34.935232  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.57301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:34.935299  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.46153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:34.935246  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:34.935261  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:35.031094  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.979358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:35.130785  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.329215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:35.230897  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.607415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:35.330778  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.468184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:35.430614  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.378206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:35.530547  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.325592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:35.630660  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.444733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:35.730744  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.447609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:35.826078  110531 httplog.go:90] GET /api/v1/namespaces/default: (1.273878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:35.827758  110531 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.194004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:35.829453  110531 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.338257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:35.830430  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.201295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:35.929582  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:35.930582  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.442926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:35.932082  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:35.933084  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:35.933253  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:35.933289  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:35.933398  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:35.933456  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:35.934517  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:35.935056  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:35.935119  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.390155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:35.935231  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.590941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:35.935548  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:35.935615  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:36.030879  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.589067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:36.130320  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.135408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:36.230707  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.446687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:36.330735  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.449493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:36.430535  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.314889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:36.530432  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.347658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:36.630755  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.553035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:36.730592  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.379423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:36.830895  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.674849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:36.926101  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:36.926136  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:36.926301  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:36.926395  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:36.928157  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.477272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:36.928877  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (2.066207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:36.929678  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:36.930153  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.150201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:36.932174  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:36.933272  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:36.933406  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:36.933422  110531 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:36.933511  110531 factory.go:550] Unable to schedule preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:50:36.933543  110531 factory.go:624] Updating pod condition for preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:50:36.934666  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:36.934754  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.027681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:36.934855  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.020029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:36.935186  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:36.935728  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:36.935753  110531 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:50:37.030472  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.210257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:37.140388  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (4.846651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:37.145819  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (4.205283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:37.156802  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/waiting-pod: (9.706159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:37.168945  110531 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/waiting-pod: (10.672265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:37.180379  110531 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:37.180436  110531 scheduler.go:473] Skip schedule deleting pod: preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/preemptor-pod
I0814 06:50:37.182400  110531 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (13.01836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
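The "Skip schedule deleting pod" line just above is the scheduler declining to run a scheduling cycle for a pod that already carries a deletion timestamp (set by the DELETE immediately before it). A sketch of that guard, paraphrased rather than copied from the scheduler source:

package sketch

import (
    v1 "k8s.io/api/core/v1"
    "k8s.io/klog"
)

// skipPodSchedule mirrors the guard behind "Skip schedule deleting pod":
// once a pod has a deletion timestamp there is no point binding it, so the
// scheduler logs the skip and moves on to the next pod in the queue.
func skipPodSchedule(pod *v1.Pod) bool {
    if pod.DeletionTimestamp != nil {
        klog.V(3).Infof("Skip schedule deleting pod: %v/%v", pod.Namespace, pod.Name)
        return true
    }
    return false
}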
I0814 06:50:37.186236  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/waiting-pod: (1.966339ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44322]
I0814 06:50:37.186722  110531 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/events: (4.114743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:37.204177  110531 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin06cd77ee-ec10-4241-9342-be662ba109e6/pods/preemptor-pod: (1.06291ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:37.204600  110531 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=28864&timeout=5m21s&timeoutSeconds=321&watch=true: (1m1.281915503s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38470]
I0814 06:50:37.204626  110531 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=28864&timeout=9m7s&timeoutSeconds=547&watch=true: (1m1.282870183s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38456]
E0814 06:50:37.204714  110531 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0814 06:50:37.204824  110531 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=28867&timeout=5m58s&timeoutSeconds=358&watch=true: (1m1.286037147s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38464]
I0814 06:50:37.204838  110531 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=28866&timeout=6m27s&timeoutSeconds=387&watch=true: (1m1.281008296s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38474]
I0814 06:50:37.204874  110531 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=28867&timeout=8m12s&timeoutSeconds=492&watch=true: (1m1.280678535s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38476]
I0814 06:50:37.204902  110531 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=28864&timeout=6m2s&timeoutSeconds=362&watch=true: (1m1.285271386s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38460]
I0814 06:50:37.204904  110531 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=29340&timeout=8m48s&timeoutSeconds=528&watch=true: (1m1.286342378s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38066]
I0814 06:50:37.204902  110531 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=28865&timeout=5m39s&timeoutSeconds=339&watch=true: (1m1.28253662s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38472]
I0814 06:50:37.204960  110531 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=28864&timeout=7m7s&timeoutSeconds=427&watch=true: (1m1.285940791s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38458]
I0814 06:50:37.204972  110531 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=28864&timeout=5m13s&timeoutSeconds=313&watch=true: (1m1.28400974s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38468]
I0814 06:50:37.205412  110531 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=28864&timeout=7m58s&timeoutSeconds=478&watch=true: (1m1.283654765s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38466]
I0814 06:50:37.209478  110531 httplog.go:90] DELETE /api/v1/nodes: (4.970211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:37.209714  110531 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 06:50:37.211959  110531 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.827104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
I0814 06:50:37.214567  110531 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.078291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44334]
--- FAIL: TestPreemptWithPermitPlugin (64.83s)
    framework_test.go:1618: Expected the preemptor pod to be scheduled. error: timed out waiting for the condition
    framework_test.go:1622: Expected the waiting pod to get preempted and deleted

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-064248.xml
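"timed out waiting for the condition" is the stock wait.ErrWaitTimeout message from k8s.io/apimachinery/pkg/util/wait, so the assertion that failed at framework_test.go:1618 is almost certainly a poll of the following shape. This is a sketch under that assumption; the helper name, interval, and timeout are illustrative (the ~100ms cadence of the GET requests in the log suggests the real poll interval):

package sketch

import (
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls until the pod has been bound to a node. If the
// condition never becomes true within the timeout, wait.Poll returns
// wait.ErrWaitTimeout, whose Error() string is exactly
// "timed out waiting for the condition".
func waitForPodScheduled(cs kubernetes.Interface, ns, name string) error {
    return wait.Poll(100*time.Millisecond, time.Minute, func() (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return pod.Spec.NodeName != "", nil
    })
}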

Error lines from build-log.txt

... skipping 729 lines ...
W0814 06:37:41.503] I0814 06:37:41.294095   53182 node_lifecycle_controller.go:431] Controller will taint node by condition.
W0814 06:37:41.503] I0814 06:37:41.294113   53182 controllermanager.go:535] Started "nodelifecycle"
W0814 06:37:41.503] I0814 06:37:41.294252   53182 endpoints_controller.go:170] Starting endpoint controller
W0814 06:37:41.503] I0814 06:37:41.294284   53182 controller_utils.go:1029] Waiting for caches to sync for endpoint controller
W0814 06:37:41.503] I0814 06:37:41.294367   53182 daemon_controller.go:267] Starting daemon sets controller
W0814 06:37:41.503] I0814 06:37:41.294387   53182 controller_utils.go:1029] Waiting for caches to sync for daemon sets controller
W0814 06:37:41.504] E0814 06:37:41.294392   53182 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0814 06:37:41.504] W0814 06:37:41.294404   53182 controllermanager.go:527] Skipping "service"
W0814 06:37:41.504] I0814 06:37:41.294411   53182 node_lifecycle_controller.go:455] Starting node controller
W0814 06:37:41.504] I0814 06:37:41.294418   53182 gc_controller.go:76] Starting GC controller
W0814 06:37:41.504] I0814 06:37:41.294432   53182 controller_utils.go:1029] Waiting for caches to sync for taint controller
W0814 06:37:41.504] I0814 06:37:41.294440   53182 controller_utils.go:1029] Waiting for caches to sync for GC controller
W0814 06:37:41.504] W0814 06:37:41.294412   53182 controllermanager.go:527] Skipping "root-ca-cert-publisher"
... skipping 44 lines ...
W0814 06:37:41.511] I0814 06:37:41.460610   53182 controllermanager.go:535] Started "job"
W0814 06:37:41.511] I0814 06:37:41.461369   53182 job_controller.go:143] Starting job controller
W0814 06:37:41.511] I0814 06:37:41.461409   53182 controller_utils.go:1029] Waiting for caches to sync for job controller
W0814 06:37:41.511] I0814 06:37:41.461727   53182 controllermanager.go:535] Started "horizontalpodautoscaling"
W0814 06:37:41.511] I0814 06:37:41.462212   53182 controllermanager.go:535] Started "disruption"
W0814 06:37:41.511] I0814 06:37:41.462458   53182 node_lifecycle_controller.go:77] Sending events to api server
W0814 06:37:41.511] E0814 06:37:41.462515   53182 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0814 06:37:41.511] W0814 06:37:41.462527   53182 controllermanager.go:527] Skipping "cloud-node-lifecycle"
W0814 06:37:41.512] I0814 06:37:41.462794   53182 horizontal.go:156] Starting HPA controller
W0814 06:37:41.512] I0814 06:37:41.462823   53182 controller_utils.go:1029] Waiting for caches to sync for HPA controller
W0814 06:37:41.512] I0814 06:37:41.462841   53182 disruption.go:333] Starting disruption controller
W0814 06:37:41.512] I0814 06:37:41.462871   53182 controller_utils.go:1029] Waiting for caches to sync for disruption controller
W0814 06:37:41.512] I0814 06:37:41.463404   53182 controllermanager.go:535] Started "replicaset"
... skipping 18 lines ...
W0814 06:37:41.773] I0814 06:37:41.771947   53182 deployment_controller.go:152] Starting deployment controller
W0814 06:37:41.774] I0814 06:37:41.771968   53182 controller_utils.go:1029] Waiting for caches to sync for deployment controller
W0814 06:37:41.774] W0814 06:37:41.772177   53182 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0814 06:37:41.774] I0814 06:37:41.773141   53182 controllermanager.go:535] Started "attachdetach"
W0814 06:37:41.775] I0814 06:37:41.773876   53182 attach_detach_controller.go:335] Starting attach detach controller
W0814 06:37:41.775] I0814 06:37:41.774111   53182 controller_utils.go:1029] Waiting for caches to sync for attach detach controller
W0814 06:37:41.805] W0814 06:37:41.804847   53182 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0814 06:37:41.866] I0814 06:37:41.865632   53182 controller_utils.go:1036] Caches are synced for PV protection controller
W0814 06:37:41.893] I0814 06:37:41.892941   53182 controller_utils.go:1036] Caches are synced for TTL controller
W0814 06:37:41.962] I0814 06:37:41.961718   53182 controller_utils.go:1036] Caches are synced for job controller
W0814 06:37:41.963] I0814 06:37:41.963074   53182 controller_utils.go:1036] Caches are synced for HPA controller
W0814 06:37:41.995] I0814 06:37:41.994524   53182 controller_utils.go:1036] Caches are synced for endpoint controller
W0814 06:37:41.995] I0814 06:37:41.994546   53182 controller_utils.go:1036] Caches are synced for daemon sets controller
... skipping 2 lines ...
W0814 06:37:41.996] I0814 06:37:41.994786   53182 node_lifecycle_controller.go:1189] Initializing eviction metric for zone: 
W0814 06:37:41.996] I0814 06:37:41.994972   53182 node_lifecycle_controller.go:1039] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0814 06:37:41.996] I0814 06:37:41.995002   53182 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"7206be25-0c4e-4b55-8885-cfd74898ff8b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W0814 06:37:41.996] I0814 06:37:41.994983   53182 taint_manager.go:186] Starting NoExecuteTaintManager
W0814 06:37:42.007] I0814 06:37:42.006820   53182 controller_utils.go:1036] Caches are synced for ReplicationController controller
W0814 06:37:42.107] I0814 06:37:42.106877   53182 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0814 06:37:42.120] E0814 06:37:42.119470   53182 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0814 06:37:42.128] E0814 06:37:42.127964   53182 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0814 06:37:42.163] I0814 06:37:42.163133   53182 controller_utils.go:1036] Caches are synced for disruption controller
W0814 06:37:42.164] I0814 06:37:42.163414   53182 disruption.go:341] Sending events to api server.
W0814 06:37:42.164] I0814 06:37:42.163925   53182 controller_utils.go:1036] Caches are synced for ReplicaSet controller
W0814 06:37:42.172] I0814 06:37:42.172277   53182 controller_utils.go:1036] Caches are synced for deployment controller
I0814 06:37:42.273] NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
I0814 06:37:42.274] kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   35s
... skipping 91 lines ...
I0814 06:37:45.436] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:37:45.438] +++ command: run_RESTMapper_evaluation_tests
I0814 06:37:45.453] +++ [0814 06:37:45] Creating namespace namespace-1565764665-8806
I0814 06:37:45.532] namespace/namespace-1565764665-8806 created
I0814 06:37:45.608] Context "test" modified.
I0814 06:37:45.616] +++ [0814 06:37:45] Testing RESTMapper
I0814 06:37:45.720] +++ [0814 06:37:45] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0814 06:37:45.735] +++ exit code: 0
I0814 06:37:45.849] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0814 06:37:45.849] bindings                                                                      true         Binding
I0814 06:37:45.849] componentstatuses                 cs                                          false        ComponentStatus
I0814 06:37:45.850] configmaps                        cm                                          true         ConfigMap
I0814 06:37:45.850] endpoints                         ep                                          true         Endpoints
... skipping 664 lines ...
I0814 06:38:05.528] poddisruptionbudget.policy/test-pdb-3 created
I0814 06:38:05.617] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0814 06:38:05.686] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0814 06:38:05.783] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0814 06:38:05.943] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:38:06.139] (Bpod/env-test-pod created
W0814 06:38:06.240] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0814 06:38:06.241] error: setting 'all' parameter but found a non empty selector. 
W0814 06:38:06.241] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 06:38:06.241] I0814 06:38:05.186461   49700 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0814 06:38:06.241] error: min-available and max-unavailable cannot be both specified
I0814 06:38:06.342] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0814 06:38:06.342] Name:         env-test-pod
I0814 06:38:06.343] Namespace:    test-kubectl-describe-pod
I0814 06:38:06.343] Priority:     0
I0814 06:38:06.343] Node:         <none>
I0814 06:38:06.343] Labels:       <none>
... skipping 173 lines ...
I0814 06:38:19.984] pod/valid-pod patched
I0814 06:38:20.091] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0814 06:38:20.177] pod/valid-pod patched
I0814 06:38:20.284] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0814 06:38:20.471] pod/valid-pod patched
I0814 06:38:20.584] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 06:38:20.785] +++ [0814 06:38:20] "kubectl patch with resourceVersion 496" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
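The resourceVersion check above is plain optimistic concurrency; a hypothetical reproduction (the patch body is assumed; the log records only the pinned resourceVersion 496):
# A patch that pins metadata.resourceVersion becomes a compare-and-swap and is
# rejected with a Conflict once the live object has advanced past that version:
kubectl patch pod valid-pod -p '{"metadata":{"resourceVersion":"496","labels":{"patched":"true"}}}'
# -> Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod"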
I0814 06:38:21.044] pod "valid-pod" deleted
I0814 06:38:21.056] pod/valid-pod replaced
I0814 06:38:21.164] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0814 06:38:21.333] Successful
I0814 06:38:21.334] message:error: --grace-period must have --force specified
I0814 06:38:21.334] has:\-\-grace-period must have \-\-force specified
I0814 06:38:21.515] Successful
I0814 06:38:21.515] message:error: --timeout must have --force specified
I0814 06:38:21.515] has:\-\-timeout must have \-\-force specified
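Both assertions above pin down the same delete guard rail; roughly (the flag values are assumed, since the harness elides the exact commands):
kubectl delete pod valid-pod --grace-period=0   # error: --grace-period must have --force specified
kubectl delete pod valid-pod --timeout=1m       # error: --timeout must have --force specified
# With --force added, deletion proceeds immediately and kubectl prints the
# "Immediate deletion does not wait..." warning seen elsewhere in this log.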
W0814 06:38:21.666] W0814 06:38:21.665345   53182 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0814 06:38:21.766] node/node-v1-test created
I0814 06:38:21.839] node/node-v1-test replaced
I0814 06:38:21.942] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0814 06:38:22.031] node "node-v1-test" deleted
W0814 06:38:22.132] I0814 06:38:21.997607   53182 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"7b75ed86-4a87-400f-8ebd-b4b5c19f7625", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
I0814 06:38:22.232] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 27 lines ...
I0814 06:38:24.001] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0814 06:38:24.103] (Bpod/valid-pod labeled
W0814 06:38:24.204] Edit cancelled, no changes made.
W0814 06:38:24.204] Edit cancelled, no changes made.
W0814 06:38:24.204] Edit cancelled, no changes made.
W0814 06:38:24.204] Edit cancelled, no changes made.
W0814 06:38:24.204] error: 'name' already has a value (valid-pod), and --overwrite is false
I0814 06:38:24.305] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
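The step above also shows the label overwrite guard: rewriting an existing label key is refused unless explicitly forced. A minimal sketch:
kubectl label pod valid-pod name=valid-pod-super-sayan
# -> error: 'name' already has a value (valid-pod), and --overwrite is false
kubectl label pod valid-pod name=valid-pod-super-sayan --overwrite
# -> pod/valid-pod labeled; the core.sh:597 assertion then reads the new value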
I0814 06:38:24.321] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 06:38:24.412] pod "valid-pod" force deleted
W0814 06:38:24.513] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 06:38:24.613] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:38:24.614] +++ [0814 06:38:24] Creating namespace namespace-1565764704-13043
... skipping 83 lines ...
I0814 06:38:31.583] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0814 06:38:31.586] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:38:31.588] +++ command: run_kubectl_create_error_tests
I0814 06:38:31.602] +++ [0814 06:38:31] Creating namespace namespace-1565764711-28377
I0814 06:38:31.680] namespace/namespace-1565764711-28377 created
I0814 06:38:31.750] Context "test" modified.
I0814 06:38:31.758] +++ [0814 06:38:31] Testing kubectl create with error
W0814 06:38:31.858] Error: must specify one of -f and -k
W0814 06:38:31.859] 
W0814 06:38:31.859] Create a resource from a file or from stdin.
W0814 06:38:31.859] 
W0814 06:38:31.860]  JSON and YAML formats are accepted.
W0814 06:38:31.860] 
W0814 06:38:31.860] Examples:
... skipping 41 lines ...
W0814 06:38:31.865] 
W0814 06:38:31.865] Usage:
W0814 06:38:31.865]   kubectl create -f FILENAME [options]
W0814 06:38:31.865] 
W0814 06:38:31.865] Use "kubectl <command> --help" for more information about a given command.
W0814 06:38:31.865] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0814 06:38:31.995] +++ [0814 06:38:31] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0814 06:38:32.096] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 06:38:32.096] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 06:38:32.197] +++ exit code: 0
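The create-error tests above exercise two distinct failure paths; a sketch (file path taken from the log):
kubectl create
# -> Error: must specify one of -f and -k
kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml
# -> client-side schema validation rejects the nil entry in containers[0].args
# As the error text itself suggests, validation can be bypassed:
kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml --validate=false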
I0814 06:38:32.201] Recording: run_kubectl_apply_tests
I0814 06:38:32.202] Running command: run_kubectl_apply_tests
I0814 06:38:32.224] 
... skipping 19 lines ...
W0814 06:38:34.251] I0814 06:38:34.251132   49700 client.go:354] parsed scheme: ""
W0814 06:38:34.252] I0814 06:38:34.251168   49700 client.go:354] scheme "" not registered, fallback to default scheme
W0814 06:38:34.252] I0814 06:38:34.251203   49700 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0814 06:38:34.252] I0814 06:38:34.251242   49700 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 06:38:34.254] I0814 06:38:34.253976   49700 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 06:38:34.256] I0814 06:38:34.256131   49700 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0814 06:38:34.343] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0814 06:38:34.443] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0814 06:38:34.444] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 06:38:34.459] +++ exit code: 0
I0814 06:38:34.496] Recording: run_kubectl_run_tests
I0814 06:38:34.497] Running command: run_kubectl_run_tests
I0814 06:38:34.519] 
... skipping 95 lines ...
I0814 06:38:36.969] Context "test" modified.
I0814 06:38:36.976] +++ [0814 06:38:36] Testing kubectl create filter
I0814 06:38:37.058] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:38:37.248] pod/selector-test-pod created
I0814 06:38:37.352] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0814 06:38:37.433] Successful
I0814 06:38:37.434] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0814 06:38:37.434] has:pods "selector-test-pod-dont-apply" not found
I0814 06:38:37.505] pod "selector-test-pod" deleted
I0814 06:38:37.525] +++ exit code: 0
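The create-filter test above relies on kubectl create's -l/--selector flag: only manifests whose labels match the selector are created. A sketch (the file path and selector value are assumed):
kubectl create -l name=selector-test-pod -f hack/testdata/filter/pod.yaml
# -> pod/selector-test-pod created; the non-matching manifest is skipped
kubectl get pod selector-test-pod-dont-apply
# -> Error from server (NotFound), as asserted above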
I0814 06:38:37.561] Recording: run_kubectl_apply_deployments_tests
I0814 06:38:37.561] Running command: run_kubectl_apply_deployments_tests
I0814 06:38:37.585] 
... skipping 27 lines ...
I0814 06:38:39.322] apps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:38:39.406] apps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:38:39.492] apps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:38:39.651] deployment.apps/nginx created
I0814 06:38:39.744] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0814 06:38:43.983] Successful
I0814 06:38:43.984] message:Error from server (Conflict): error when applying patch:
I0814 06:38:43.984] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565764717-27965\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0814 06:38:43.984] to:
I0814 06:38:43.985] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0814 06:38:43.985] Name: "nginx", Namespace: "namespace-1565764717-27965"
I0814 06:38:43.988] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565764717-27965\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-14T06:38:39Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-14T06:38:39Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-14T06:38:39Z"]] "name":"nginx" "namespace":"namespace-1565764717-27965" "resourceVersion":"591" "selfLink":"/apis/apps/v1/namespaces/namespace-1565764717-27965/deployments/nginx" "uid":"c88ed108-3c0a-40fe-9a34-db02a5b158ce"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] 
"status":map["conditions":[map["lastTransitionTime":"2019-08-14T06:38:39Z" "lastUpdateTime":"2019-08-14T06:38:39Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-14T06:38:39Z" "lastUpdateTime":"2019-08-14T06:38:39Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0814 06:38:43.988] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0814 06:38:43.988] has:Error from server (Conflict)
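The Conflict above is self-inflicted: the manifest pins "resourceVersion":"99" inside its last-applied-configuration, so the server treats the apply as a compare-and-swap against a version the live deployment has long passed. Reproducing it is just (path from the log):
kubectl apply -f hack/testdata/deployment-label-change2.yaml
# -> Error from server (Conflict): ... the object has been modified; please
#    apply your changes to the latest version and try again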
W0814 06:38:44.089] I0814 06:38:39.654457   53182 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565764717-27965", Name:"nginx", UID:"c88ed108-3c0a-40fe-9a34-db02a5b158ce", APIVersion:"apps/v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7dbc4d9f to 3
W0814 06:38:44.089] I0814 06:38:39.657903   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764717-27965", Name:"nginx-7dbc4d9f", UID:"915e7eff-e5d0-410d-85cf-f29c0389ddc7", APIVersion:"apps/v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-wcbnt
W0814 06:38:44.090] I0814 06:38:39.660991   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764717-27965", Name:"nginx-7dbc4d9f", UID:"915e7eff-e5d0-410d-85cf-f29c0389ddc7", APIVersion:"apps/v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-ffc7h
W0814 06:38:44.090] I0814 06:38:39.661168   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764717-27965", Name:"nginx-7dbc4d9f", UID:"915e7eff-e5d0-410d-85cf-f29c0389ddc7", APIVersion:"apps/v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-cltfg
W0814 06:38:45.990] I0814 06:38:45.989602   53182 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1565764709-10272
I0814 06:38:49.250] deployment.apps/nginx configured
... skipping 173 lines ...
I0814 06:38:56.524] +++ [0814 06:38:56] Creating namespace namespace-1565764736-18859
I0814 06:38:56.604] namespace/namespace-1565764736-18859 created
I0814 06:38:56.680] Context "test" modified.
I0814 06:38:56.688] +++ [0814 06:38:56] Testing kubectl get
I0814 06:38:56.784] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:38:56.870] Successful
I0814 06:38:56.870] message:Error from server (NotFound): pods "abc" not found
I0814 06:38:56.870] has:pods "abc" not found
I0814 06:38:56.961] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:38:57.040] Successful
I0814 06:38:57.041] message:Error from server (NotFound): pods "abc" not found
I0814 06:38:57.041] has:pods "abc" not found
I0814 06:38:57.128] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:38:57.208] Successful
I0814 06:38:57.209] message:{
I0814 06:38:57.209]     "apiVersion": "v1",
I0814 06:38:57.209]     "items": [],
... skipping 23 lines ...
I0814 06:38:57.540] has not:No resources found
I0814 06:38:57.632] Successful
I0814 06:38:57.632] message:NAME
I0814 06:38:57.632] has not:No resources found
I0814 06:38:57.725] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:38:57.822] Successful
I0814 06:38:57.823] message:error: the server doesn't have a resource type "foobar"
I0814 06:38:57.823] has not:No resources found
I0814 06:38:57.901] Successful
I0814 06:38:57.902] message:No resources found in namespace-1565764736-18859 namespace.
I0814 06:38:57.902] has:No resources found
I0814 06:38:57.980] Successful
I0814 06:38:57.981] message:
I0814 06:38:57.981] has not:No resources found
I0814 06:38:58.058] Successful
I0814 06:38:58.059] message:No resources found in namespace-1565764736-18859 namespace.
I0814 06:38:58.059] has:No resources found
I0814 06:38:58.153] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:38:58.240] Successful
I0814 06:38:58.240] message:Error from server (NotFound): pods "abc" not found
I0814 06:38:58.241] has:pods "abc" not found
I0814 06:38:58.242] FAIL!
I0814 06:38:58.243] message:Error from server (NotFound): pods "abc" not found
I0814 06:38:58.243] has not:List
I0814 06:38:58.243] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0814 06:38:58.352] Successful
I0814 06:38:58.353] message:I0814 06:38:58.308777   63712 loader.go:375] Config loaded from file:  /tmp/tmp.yQUpioMIk3/.kube/config
I0814 06:38:58.353] I0814 06:38:58.310518   63712 round_trippers.go:471] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0814 06:38:58.353] I0814 06:38:58.329840   63712 round_trippers.go:471] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0814 06:39:03.868] Successful
I0814 06:39:03.869] message:NAME    DATA   AGE
I0814 06:39:03.869] one     0      0s
I0814 06:39:03.869] three   0      0s
I0814 06:39:03.869] two     0      0s
I0814 06:39:03.869] STATUS    REASON          MESSAGE
I0814 06:39:03.869] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 06:39:03.870] has not:watch is only supported on individual resources
I0814 06:39:04.958] Successful
I0814 06:39:04.958] message:STATUS    REASON          MESSAGE
I0814 06:39:04.959] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 06:39:04.959] has not:watch is only supported on individual resources
I0814 06:39:04.966] +++ [0814 06:39:04] Creating namespace namespace-1565764744-11319
I0814 06:39:05.042] namespace/namespace-1565764744-11319 created
I0814 06:39:05.115] Context "test" modified.
I0814 06:39:05.212] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:39:05.365] pod/valid-pod created
... skipping 104 lines ...
I0814 06:39:05.459] }
I0814 06:39:05.551] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 06:39:05.807] <no value>Successful
I0814 06:39:05.808] message:valid-pod:
I0814 06:39:05.808] has:valid-pod:
I0814 06:39:05.892] Successful
I0814 06:39:05.893] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0814 06:39:05.893] 	template was:
I0814 06:39:05.893] 		{.missing}
I0814 06:39:05.894] 	object given to jsonpath engine was:
I0814 06:39:05.896] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-14T06:39:05Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-14T06:39:05Z"}}, "name":"valid-pod", "namespace":"namespace-1565764744-11319", "resourceVersion":"692", "selfLink":"/api/v1/namespaces/namespace-1565764744-11319/pods/valid-pod", "uid":"f7f15c14-e4af-4b55-ae2e-5d7d763d883c"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0814 06:39:05.896] has:missing is not found
I0814 06:39:05.987] Successful
I0814 06:39:05.988] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0814 06:39:05.988] 	template was:
I0814 06:39:05.988] 		{{.missing}}
I0814 06:39:05.988] 	raw data was:
I0814 06:39:05.989] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-14T06:39:05Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-14T06:39:05Z"}],"name":"valid-pod","namespace":"namespace-1565764744-11319","resourceVersion":"692","selfLink":"/api/v1/namespaces/namespace-1565764744-11319/pods/valid-pod","uid":"f7f15c14-e4af-4b55-ae2e-5d7d763d883c"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0814 06:39:05.989] 	object given to template engine was:
I0814 06:39:05.990] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-14T06:39:05Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-14T06:39:05Z]] name:valid-pod namespace:namespace-1565764744-11319 resourceVersion:692 selfLink:/api/v1/namespaces/namespace-1565764744-11319/pods/valid-pod uid:f7f15c14-e4af-4b55-ae2e-5d7d763d883c] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0814 06:39:05.990] has:map has no entry for key "missing"
W0814 06:39:06.091] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
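The template checks above contrast how the two output formats fail on a missing key; a minimal sketch:
kubectl get pod valid-pod -o jsonpath='{.missing}'
# -> error executing jsonpath "{.missing}": missing is not found
kubectl get pod valid-pod -o go-template='{{.missing}}'
# -> executing "output" at <.missing>: map has no entry for key "missing"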
I0814 06:39:07.079] Successful
I0814 06:39:07.079] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 06:39:07.079] valid-pod   0/1     Pending   0          1s
I0814 06:39:07.079] STATUS      REASON          MESSAGE
I0814 06:39:07.079] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 06:39:07.080] has:STATUS
I0814 06:39:07.081] Successful
I0814 06:39:07.081] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 06:39:07.082] valid-pod   0/1     Pending   0          1s
I0814 06:39:07.082] STATUS      REASON          MESSAGE
I0814 06:39:07.082] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 06:39:07.082] has:valid-pod
I0814 06:39:08.161] Successful
I0814 06:39:08.161] message:pod/valid-pod
I0814 06:39:08.161] has not:STATUS
I0814 06:39:08.162] Successful
I0814 06:39:08.162] message:pod/valid-pod
... skipping 144 lines ...
I0814 06:39:09.262] status:
I0814 06:39:09.262]   phase: Pending
I0814 06:39:09.262]   qosClass: Guaranteed
I0814 06:39:09.262] ---
I0814 06:39:09.262] has:name: valid-pod
I0814 06:39:09.327] Successful
I0814 06:39:09.327] message:Error from server (NotFound): pods "invalid-pod" not found
I0814 06:39:09.327] has:"invalid-pod" not found
I0814 06:39:09.401] pod "valid-pod" deleted
I0814 06:39:09.492] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:39:09.633] pod/redis-master created
I0814 06:39:09.637] pod/valid-pod created
I0814 06:39:09.736] Successful
... skipping 35 lines ...
I0814 06:39:10.849] +++ command: run_kubectl_exec_pod_tests
I0814 06:39:10.863] +++ [0814 06:39:10] Creating namespace namespace-1565764750-22791
I0814 06:39:10.932] namespace/namespace-1565764750-22791 created
I0814 06:39:10.998] Context "test" modified.
I0814 06:39:11.007] +++ [0814 06:39:11] Testing kubectl exec POD COMMAND
I0814 06:39:11.086] Successful
I0814 06:39:11.086] message:Error from server (NotFound): pods "abc" not found
I0814 06:39:11.086] has:pods "abc" not found
I0814 06:39:11.242] pod/test-pod created
I0814 06:39:11.345] Successful
I0814 06:39:11.346] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 06:39:11.346] has not:pods "test-pod" not found
I0814 06:39:11.347] Successful
I0814 06:39:11.347] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 06:39:11.347] has not:pod or type/name must be specified
I0814 06:39:11.419] pod "test-pod" deleted
I0814 06:39:11.439] +++ exit code: 0
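The exec checks above distinguish existence from schedulability: test-pod exists but is Pending, so the kubelet-facing call fails with BadRequest rather than NotFound. Sketch (the command after -- is assumed):
kubectl exec abc -- date        # Error from server (NotFound): pods "abc" not found
kubectl exec test-pod -- date   # Error from server (BadRequest): pod test-pod does not have a host assigned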
I0814 06:39:11.472] Recording: run_kubectl_exec_resource_name_tests
I0814 06:39:11.473] Running command: run_kubectl_exec_resource_name_tests
I0814 06:39:11.495] 
... skipping 2 lines ...
I0814 06:39:11.503] +++ command: run_kubectl_exec_resource_name_tests
I0814 06:39:11.516] +++ [0814 06:39:11] Creating namespace namespace-1565764751-15638
I0814 06:39:11.594] namespace/namespace-1565764751-15638 created
I0814 06:39:11.664] Context "test" modified.
I0814 06:39:11.672] +++ [0814 06:39:11] Testing kubectl exec TYPE/NAME COMMAND
I0814 06:39:11.773] Successful
I0814 06:39:11.773] message:error: the server doesn't have a resource type "foo"
I0814 06:39:11.774] has:error:
I0814 06:39:11.861] Successful
I0814 06:39:11.861] message:Error from server (NotFound): deployments.apps "bar" not found
I0814 06:39:11.862] has:"bar" not found
I0814 06:39:12.005] pod/test-pod created
I0814 06:39:12.165] replicaset.apps/frontend created
W0814 06:39:12.265] I0814 06:39:12.170316   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764751-15638", Name:"frontend", UID:"47b957c2-a34b-4861-8b47-164572ff67a1", APIVersion:"apps/v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jgnnn
W0814 06:39:12.266] I0814 06:39:12.174161   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764751-15638", Name:"frontend", UID:"47b957c2-a34b-4861-8b47-164572ff67a1", APIVersion:"apps/v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vvpfn
W0814 06:39:12.266] I0814 06:39:12.176439   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764751-15638", Name:"frontend", UID:"47b957c2-a34b-4861-8b47-164572ff67a1", APIVersion:"apps/v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-x9fhb
I0814 06:39:12.367] configmap/test-set-env-config created
I0814 06:39:12.400] Successful
I0814 06:39:12.400] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0814 06:39:12.400] has:not implemented
I0814 06:39:12.485] Successful
I0814 06:39:12.485] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 06:39:12.485] has not:not found
I0814 06:39:12.487] Successful
I0814 06:39:12.487] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 06:39:12.487] has not:pod or type/name must be specified
I0814 06:39:12.585] Successful
I0814 06:39:12.585] message:Error from server (BadRequest): pod frontend-jgnnn does not have a host assigned
I0814 06:39:12.585] has not:not found
I0814 06:39:12.588] Successful
I0814 06:39:12.588] message:Error from server (BadRequest): pod frontend-jgnnn does not have a host assigned
I0814 06:39:12.588] has not:pod or type/name must be specified
I0814 06:39:12.667] pod "test-pod" deleted
I0814 06:39:12.753] replicaset.apps "frontend" deleted
I0814 06:39:12.839] configmap "test-set-env-config" deleted
I0814 06:39:12.859] +++ exit code: 0
I0814 06:39:12.897] Recording: run_create_secret_tests
I0814 06:39:12.897] Running command: run_create_secret_tests
I0814 06:39:12.924] 
I0814 06:39:12.927] +++ Running case: test-cmd.run_create_secret_tests 
I0814 06:39:12.930] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:39:12.932] +++ command: run_create_secret_tests
I0814 06:39:13.030] Successful
I0814 06:39:13.030] message:Error from server (NotFound): secrets "mysecret" not found
I0814 06:39:13.030] has:secrets "mysecret" not found
I0814 06:39:13.192] Successful
I0814 06:39:13.193] message:Error from server (NotFound): secrets "mysecret" not found
I0814 06:39:13.193] has:secrets "mysecret" not found
I0814 06:39:13.195] Successful
I0814 06:39:13.195] message:user-specified
I0814 06:39:13.195] has:user-specified
I0814 06:39:13.270] Successful
I0814 06:39:13.346] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"089d2e63-277b-4f2d-a3ff-3b7a2d1cc09e","resourceVersion":"766","creationTimestamp":"2019-08-14T06:39:13Z"}}
... skipping 2 lines ...
I0814 06:39:13.505] has:uid
I0814 06:39:13.585] Successful
I0814 06:39:13.586] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"089d2e63-277b-4f2d-a3ff-3b7a2d1cc09e","resourceVersion":"768","creationTimestamp":"2019-08-14T06:39:13Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-14T06:39:13Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0814 06:39:13.586] has:config1
I0814 06:39:13.658] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"089d2e63-277b-4f2d-a3ff-3b7a2d1cc09e"}}
I0814 06:39:13.750] Successful
I0814 06:39:13.751] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0814 06:39:13.751] has:configmaps "tester-update-cm" not found
I0814 06:39:13.765] +++ exit code: 0
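The create-secret tests above include a dry-run namespace check; a sketch with assumed literal values:
# A --dry-run create is not persisted, so it succeeds and echoes back whatever
# namespace the caller named, even one that does not exist:
kubectl create secret generic mysecret --from-literal=key=value \
  --dry-run --namespace=user-specified -o jsonpath='{.metadata.namespace}'
# -> user-specified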
I0814 06:39:13.800] Recording: run_kubectl_create_kustomization_directory_tests
I0814 06:39:13.800] Running command: run_kubectl_create_kustomization_directory_tests
I0814 06:39:13.823] 
I0814 06:39:13.826] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
I0814 06:39:16.440] valid-pod   0/1     Pending   0          0s
I0814 06:39:16.440] has:valid-pod
I0814 06:39:17.521] Successful
I0814 06:39:17.521] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 06:39:17.521] valid-pod   0/1     Pending   0          0s
I0814 06:39:17.521] STATUS      REASON          MESSAGE
I0814 06:39:17.521] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 06:39:17.521] has:Timeout exceeded while reading body
I0814 06:39:17.604] Successful
I0814 06:39:17.604] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 06:39:17.604] valid-pod   0/1     Pending   0          1s
I0814 06:39:17.604] has:valid-pod
I0814 06:39:17.673] Successful
I0814 06:39:17.673] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0814 06:39:17.673] has:Invalid timeout value
I0814 06:39:17.749] pod "valid-pod" deleted
I0814 06:39:17.770] +++ exit code: 0
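The timeout check above is client-side flag parsing; a sketch (the invalid value is assumed):
kubectl get pod valid-pod --request-timeout=10s      # accepted: integer plus unit
kubectl get pod valid-pod --request-timeout=forever
# -> error: Invalid timeout value. Timeout must be a single integer in seconds,
#    or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)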
I0814 06:39:17.808] Recording: run_crd_tests
I0814 06:39:17.809] Running command: run_crd_tests
I0814 06:39:17.831] 
... skipping 245 lines ...
I0814 06:39:22.428] foo.company.com/test patched
I0814 06:39:22.514] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0814 06:39:22.590] foo.company.com/test patched
I0814 06:39:22.684] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0814 06:39:22.768] foo.company.com/test patched
I0814 06:39:22.858] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0814 06:39:23.011] +++ [0814 06:39:23] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0814 06:39:23.071] {
I0814 06:39:23.071]     "apiVersion": "company.com/v1",
I0814 06:39:23.071]     "kind": "Foo",
I0814 06:39:23.071]     "metadata": {
I0814 06:39:23.071]         "annotations": {
I0814 06:39:23.072]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 354 lines ...
I0814 06:39:50.295] bar.company.com/test created
I0814 06:39:50.395] crd.sh:455: Successful get bars {{len .items}}: 1
I0814 06:39:50.478] namespace "non-native-resources" deleted
I0814 06:39:55.682] crd.sh:458: Successful get bars {{len .items}}: 0
I0814 06:39:55.836] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I0814 06:39:55.936] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
W0814 06:39:56.036] Error from server (NotFound): namespaces "non-native-resources" not found
I0814 06:39:56.137] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 06:39:56.160] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0814 06:39:56.192] +++ exit code: 0
I0814 06:39:56.229] Recording: run_cmd_with_img_tests
I0814 06:39:56.230] Running command: run_cmd_with_img_tests
I0814 06:39:56.253] 
... skipping 6 lines ...
I0814 06:39:56.426] +++ [0814 06:39:56] Testing cmd with image
I0814 06:39:56.521] Successful
I0814 06:39:56.521] message:deployment.apps/test1 created
I0814 06:39:56.521] has:deployment.apps/test1 created
I0814 06:39:56.601] deployment.apps "test1" deleted
I0814 06:39:56.700] Successful
I0814 06:39:56.701] message:error: Invalid image name "InvalidImageName": invalid reference format
I0814 06:39:56.701] has:error: Invalid image name "InvalidImageName": invalid reference format
I0814 06:39:56.715] +++ exit code: 0
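The image-name test above fails client-side, before anything reaches the API server; a sketch using the deprecated generator the warnings further down report (the passing image is assumed):
kubectl run test1 --generator=deployment/apps.v1 --image=k8s.gcr.io/nginx:test-cmd
# -> deployment.apps/test1 created
kubectl run test2 --generator=deployment/apps.v1 --image=InvalidImageName
# -> error: Invalid image name "InvalidImageName": invalid reference format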
I0814 06:39:56.752] +++ [0814 06:39:56] Testing recursive resources
I0814 06:39:56.758] +++ [0814 06:39:56] Creating namespace namespace-1565764796-498
I0814 06:39:56.827] namespace/namespace-1565764796-498 created
I0814 06:39:56.903] Context "test" modified.
I0814 06:39:57.002] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:39:57.285] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:39:57.287] Successful
I0814 06:39:57.288] message:pod/busybox0 created
I0814 06:39:57.288] pod/busybox1 created
I0814 06:39:57.288] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 06:39:57.288] has:error validating data: kind not set
I0814 06:39:57.382] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:39:57.554] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0814 06:39:57.557] Successful
I0814 06:39:57.558] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:39:57.558] has:Object 'Kind' is missing
I0814 06:39:57.652] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:39:57.945] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 06:39:57.947] Successful
I0814 06:39:57.947] message:pod/busybox0 replaced
I0814 06:39:57.947] pod/busybox1 replaced
I0814 06:39:57.947] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 06:39:57.948] has:error validating data: kind not set
I0814 06:39:58.040] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:39:58.138] Successful
I0814 06:39:58.138] message:Name:         busybox0
I0814 06:39:58.139] Namespace:    namespace-1565764796-498
I0814 06:39:58.139] Priority:     0
I0814 06:39:58.139] Node:         <none>
... skipping 159 lines ...
I0814 06:39:58.164] has:Object 'Kind' is missing
I0814 06:39:58.236] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:39:58.410] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0814 06:39:58.412] Successful
I0814 06:39:58.413] message:pod/busybox0 annotated
I0814 06:39:58.413] pod/busybox1 annotated
I0814 06:39:58.413] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:39:58.413] has:Object 'Kind' is missing
I0814 06:39:58.500] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:39:58.756] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 06:39:58.758] Successful
I0814 06:39:58.759] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 06:39:58.759] pod/busybox0 configured
I0814 06:39:58.759] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 06:39:58.759] pod/busybox1 configured
I0814 06:39:58.759] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 06:39:58.759] has:error validating data: kind not set
I0814 06:39:58.842] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:39:58.986] deployment.apps/nginx created
I0814 06:39:59.093] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 06:39:59.183] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0814 06:39:59.351] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I0814 06:39:59.353] Successful
... skipping 42 lines ...
I0814 06:39:59.429] deployment.apps "nginx" deleted
I0814 06:39:59.533] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:39:59.702] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:39:59.704] Successful
I0814 06:39:59.704] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0814 06:39:59.705] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 06:39:59.705] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:39:59.705] has:Object 'Kind' is missing
I0814 06:39:59.795] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:39:59.876] Successful
I0814 06:39:59.877] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:39:59.877] has:busybox0:busybox1:
I0814 06:39:59.878] Successful
I0814 06:39:59.879] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:39:59.879] has:Object 'Kind' is missing
I0814 06:39:59.972] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:40:00.064] pod/busybox0 labeled
I0814 06:40:00.065] pod/busybox1 labeled
I0814 06:40:00.065] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:40:00.158] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0814 06:40:00.161] Successful
I0814 06:40:00.161] message:pod/busybox0 labeled
I0814 06:40:00.161] pod/busybox1 labeled
I0814 06:40:00.161] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:40:00.161] has:Object 'Kind' is missing
I0814 06:40:00.248] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:40:00.337] pod/busybox0 patched
I0814 06:40:00.337] pod/busybox1 patched
I0814 06:40:00.337] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:40:00.427] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0814 06:40:00.430] Successful
I0814 06:40:00.430] message:pod/busybox0 patched
I0814 06:40:00.430] pod/busybox1 patched
I0814 06:40:00.431] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:40:00.431] has:Object 'Kind' is missing
I0814 06:40:00.523] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:40:00.709] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:40:00.712] Successful
I0814 06:40:00.712] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 06:40:00.712] pod "busybox0" force deleted
I0814 06:40:00.712] pod "busybox1" force deleted
I0814 06:40:00.712] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:40:00.713] has:Object 'Kind' is missing
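Everything from the pod block above through the rc and deployment blocks below exercises the same recursive contract: with -R/--recursive, kubectl walks the directory, processes every decodable manifest, reports the deliberately broken one (its Kind key is misspelled "ind"), and exits non-zero. Sketch with paths from the log:
kubectl create -f hack/testdata/recursive/pod --recursive
# pod/busybox0 created
# pod/busybox1 created
# error: unable to decode ".../busybox-broken.yaml": Object 'Kind' is missing ...
echo $?   # non-zero: partial success still fails the command overall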
I0814 06:40:00.803] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:40:00.949] replicationcontroller/busybox0 created
I0814 06:40:00.958] replicationcontroller/busybox1 created
I0814 06:40:01.055] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:40:01.154] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:40:01.250] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 06:40:01.342] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 06:40:01.524] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 06:40:01.615] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 06:40:01.618] Successful
I0814 06:40:01.619] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0814 06:40:01.619] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0814 06:40:01.620] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:40:01.620] has:Object 'Kind' is missing
I0814 06:40:01.699] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0814 06:40:01.783] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0814 06:40:01.881] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:40:01.972] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 06:40:02.058] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 06:40:02.244] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 06:40:02.329] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 06:40:02.331] Successful
I0814 06:40:02.331] message:service/busybox0 exposed
I0814 06:40:02.331] service/busybox1 exposed
I0814 06:40:02.332] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:40:02.332] has:Object 'Kind' is missing
I0814 06:40:02.419] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:40:02.506] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 06:40:02.592] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 06:40:02.784] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0814 06:40:02.870] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0814 06:40:02.872] Successful
I0814 06:40:02.872] message:replicationcontroller/busybox0 scaled
I0814 06:40:02.872] replicationcontroller/busybox1 scaled
I0814 06:40:02.873] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:40:02.873] has:Object 'Kind' is missing
I0814 06:40:02.967] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:40:03.156] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:40:03.159] Successful
I0814 06:40:03.160] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 06:40:03.160] replicationcontroller "busybox0" force deleted
I0814 06:40:03.161] replicationcontroller "busybox1" force deleted
I0814 06:40:03.161] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:40:03.162] has:Object 'Kind' is missing
I0814 06:40:03.251] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:40:03.398] deployment.apps/nginx1-deployment created
I0814 06:40:03.402] deployment.apps/nginx0-deployment created
I0814 06:40:03.508] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0814 06:40:03.602] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 06:40:03.811] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 06:40:03.813] Successful
I0814 06:40:03.813] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0814 06:40:03.813] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0814 06:40:03.814] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 06:40:03.814] has:Object 'Kind' is missing
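(Both deployments are still at revision 1, so the recursive rollback above is a no-op for them while the broken manifest again fails to decode. A sketch of the command, flags assumed:)

kubectl rollout undo -f hack/testdata/recursive/deployment --recursive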
I0814 06:40:03.912] deployment.apps/nginx1-deployment paused
I0814 06:40:03.919] deployment.apps/nginx0-deployment paused
W0814 06:40:04.020] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 06:40:04.021] I0814 06:39:56.510222   53182 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565764796-29297", Name:"test1", UID:"90f03491-6d1b-4931-b75f-8a489051974e", APIVersion:"apps/v1", ResourceVersion:"925", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-9797f89d8 to 1
W0814 06:40:04.021] I0814 06:39:56.516209   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764796-29297", Name:"test1-9797f89d8", UID:"b168324d-940d-415e-91f5-ac225386aad8", APIVersion:"apps/v1", ResourceVersion:"926", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-4x7g6
W0814 06:40:04.021] W0814 06:39:56.846187   49700 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 06:40:04.022] E0814 06:39:56.847946   53182 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.022] W0814 06:39:56.945637   49700 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 06:40:04.022] E0814 06:39:56.946989   53182 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.022] W0814 06:39:57.079141   49700 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 06:40:04.022] E0814 06:39:57.085602   53182 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.022] W0814 06:39:57.172079   49700 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 06:40:04.023] E0814 06:39:57.173353   53182 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.023] E0814 06:39:57.849218   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.023] E0814 06:39:57.948540   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.023] E0814 06:39:58.087241   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.023] E0814 06:39:58.174690   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.024] E0814 06:39:58.850620   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.024] E0814 06:39:58.950413   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.024] I0814 06:39:58.991541   53182 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565764796-498", Name:"nginx", UID:"852415e0-f07a-47a5-94a2-29b85d26b578", APIVersion:"apps/v1", ResourceVersion:"950", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0814 06:40:04.025] I0814 06:39:58.995724   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764796-498", Name:"nginx-bbbbb95b5", UID:"120970a8-29d3-466d-b306-0778d0e5ccaa", APIVersion:"apps/v1", ResourceVersion:"951", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-bc7tw
W0814 06:40:04.025] I0814 06:39:58.998885   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764796-498", Name:"nginx-bbbbb95b5", UID:"120970a8-29d3-466d-b306-0778d0e5ccaa", APIVersion:"apps/v1", ResourceVersion:"951", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-8txkm
W0814 06:40:04.025] I0814 06:39:59.000374   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764796-498", Name:"nginx-bbbbb95b5", UID:"120970a8-29d3-466d-b306-0778d0e5ccaa", APIVersion:"apps/v1", ResourceVersion:"951", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-mbh4v
W0814 06:40:04.025] E0814 06:39:59.088367   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.026] E0814 06:39:59.175883   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.026] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 06:40:04.026] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0814 06:40:04.026] E0814 06:39:59.852175   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.026] E0814 06:39:59.951700   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.026] E0814 06:40:00.089910   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.027] E0814 06:40:00.177490   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.027] E0814 06:40:00.853760   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.027] E0814 06:40:00.953213   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.027] I0814 06:40:00.957322   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565764796-498", Name:"busybox0", UID:"b325f1fc-046d-4284-8329-705fe91dfc85", APIVersion:"v1", ResourceVersion:"981", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-v85pk
W0814 06:40:04.028] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 06:40:04.028] I0814 06:40:00.961548   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565764796-498", Name:"busybox1", UID:"66c36b09-2137-4850-956f-7ea39d9bcdb7", APIVersion:"v1", ResourceVersion:"983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-8kpb7
W0814 06:40:04.028] E0814 06:40:01.091685   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.028] E0814 06:40:01.179157   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.028] E0814 06:40:01.855425   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.029] E0814 06:40:01.954464   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.029] E0814 06:40:02.093308   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.029] E0814 06:40:02.180755   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.029] I0814 06:40:02.684146   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565764796-498", Name:"busybox0", UID:"b325f1fc-046d-4284-8329-705fe91dfc85", APIVersion:"v1", ResourceVersion:"1002", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-hmsvs
W0814 06:40:04.030] I0814 06:40:02.694233   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565764796-498", Name:"busybox1", UID:"66c36b09-2137-4850-956f-7ea39d9bcdb7", APIVersion:"v1", ResourceVersion:"1007", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-wzxdj
W0814 06:40:04.030] E0814 06:40:02.856768   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.030] E0814 06:40:02.955790   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.030] E0814 06:40:03.094531   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.031] E0814 06:40:03.182121   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.031] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
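(The two "error validating data: kind not set" lines come from client-side schema validation, which rejects the broken manifests before they reach the server; the message itself names the escape hatch. Illustrative usage:)

kubectl create -f busybox-broken.yaml                   # fails validation: kind not set
kubectl create -f busybox-broken.yaml --validate=false  # skip client-side validation (the object is still undecodable)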
W0814 06:40:04.031] I0814 06:40:03.402793   53182 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565764796-498", Name:"nginx1-deployment", UID:"2a99f2b2-09e3-44eb-bad4-58ee7d1fe786", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0814 06:40:04.031] I0814 06:40:03.406143   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764796-498", Name:"nginx1-deployment-84f7f49fb7", UID:"7bf98744-ab19-4f05-836a-dfd9ac233a24", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-tqnk4
W0814 06:40:04.032] I0814 06:40:03.410291   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764796-498", Name:"nginx1-deployment-84f7f49fb7", UID:"7bf98744-ab19-4f05-836a-dfd9ac233a24", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-4q29t
W0814 06:40:04.032] I0814 06:40:03.411003   53182 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565764796-498", Name:"nginx0-deployment", UID:"eb2c6536-34a1-487b-abe1-5fb685be1b82", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0814 06:40:04.032] I0814 06:40:03.415493   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764796-498", Name:"nginx0-deployment-57475bf54d", UID:"fe65cfc3-d565-43f3-a3cf-e711136eaa17", APIVersion:"apps/v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-kgxzk
W0814 06:40:04.033] I0814 06:40:03.418566   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565764796-498", Name:"nginx0-deployment-57475bf54d", UID:"fe65cfc3-d565-43f3-a3cf-e711136eaa17", APIVersion:"apps/v1", ResourceVersion:"1030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-hrmsj
W0814 06:40:04.033] E0814 06:40:03.858719   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.033] E0814 06:40:03.957541   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.097] E0814 06:40:04.096402   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.184] E0814 06:40:04.183657   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:04.285] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0814 06:40:04.285] Successful
I0814 06:40:04.285] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 06:40:04.285] has:Object 'Kind' is missing
I0814 06:40:04.285] deployment.apps/nginx1-deployment resumed
I0814 06:40:04.286] deployment.apps/nginx0-deployment resumed
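(Pause and resume are also exercised recursively, with the paused state asserted through .spec.paused. A sketch, flags assumed:)

kubectl rollout pause -f hack/testdata/recursive/deployment --recursive
kubectl get deployment -o go-template='{{range .items}}{{.spec.paused}}:{{end}}'   # true:true:
kubectl rollout resume -f hack/testdata/recursive/deployment --recursive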
... skipping 7 lines ...
I0814 06:40:04.355] 1         <none>
I0814 06:40:04.355] 
I0814 06:40:04.355] deployment.apps/nginx0-deployment 
I0814 06:40:04.355] REVISION  CHANGE-CAUSE
I0814 06:40:04.355] 1         <none>
I0814 06:40:04.355] 
I0814 06:40:04.357] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 06:40:04.357] has:nginx0-deployment
I0814 06:40:04.357] Successful
I0814 06:40:04.357] message:deployment.apps/nginx1-deployment 
I0814 06:40:04.358] REVISION  CHANGE-CAUSE
I0814 06:40:04.358] 1         <none>
I0814 06:40:04.358] 
I0814 06:40:04.358] deployment.apps/nginx0-deployment 
I0814 06:40:04.358] REVISION  CHANGE-CAUSE
I0814 06:40:04.358] 1         <none>
I0814 06:40:04.358] 
I0814 06:40:04.358] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 06:40:04.358] has:nginx1-deployment
I0814 06:40:04.360] Successful
I0814 06:40:04.361] message:deployment.apps/nginx1-deployment 
I0814 06:40:04.361] REVISION  CHANGE-CAUSE
I0814 06:40:04.361] 1         <none>
I0814 06:40:04.361] 
I0814 06:40:04.361] deployment.apps/nginx0-deployment 
I0814 06:40:04.361] REVISION  CHANGE-CAUSE
I0814 06:40:04.361] 1         <none>
I0814 06:40:04.361] 
I0814 06:40:04.362] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 06:40:04.362] has:Object 'Kind' is missing
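(The three rollout-history assertions above check the combined output for nginx0-deployment, nginx1-deployment, and the decode error. A sketch of the command, flags assumed:)

kubectl rollout history -f hack/testdata/recursive/deployment --recursive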
I0814 06:40:04.439] deployment.apps "nginx1-deployment" force deleted
I0814 06:40:04.444] deployment.apps "nginx0-deployment" force deleted
W0814 06:40:04.545] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 06:40:04.546] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0814 06:40:04.861] E0814 06:40:04.860449   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:04.960] E0814 06:40:04.959417   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:05.099] E0814 06:40:05.098806   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:05.186] E0814 06:40:05.185462   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:05.549] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:40:05.714] replicationcontroller/busybox0 created
I0814 06:40:05.721] replicationcontroller/busybox1 created
I0814 06:40:05.828] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:40:05.922] Successful
I0814 06:40:05.922] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0814 06:40:05.925] message:no rollbacker has been implemented for "ReplicationController"
I0814 06:40:05.925] no rollbacker has been implemented for "ReplicationController"
I0814 06:40:05.925] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:40:05.926] has:Object 'Kind' is missing
I0814 06:40:06.016] Successful
I0814 06:40:06.017] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:40:06.017] error: replicationcontrollers "busybox0" pausing is not supported
I0814 06:40:06.017] error: replicationcontrollers "busybox1" pausing is not supported
I0814 06:40:06.018] has:Object 'Kind' is missing
I0814 06:40:06.020] Successful
I0814 06:40:06.020] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:40:06.020] error: replicationcontrollers "busybox0" pausing is not supported
I0814 06:40:06.020] error: replicationcontrollers "busybox1" pausing is not supported
I0814 06:40:06.020] has:replicationcontrollers "busybox0" pausing is not supported
I0814 06:40:06.022] Successful
I0814 06:40:06.022] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:40:06.023] error: replicationcontrollers "busybox0" pausing is not supported
I0814 06:40:06.023] error: replicationcontrollers "busybox1" pausing is not supported
I0814 06:40:06.023] has:replicationcontrollers "busybox1" pausing is not supported
I0814 06:40:06.126] Successful
I0814 06:40:06.127] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:40:06.127] error: replicationcontrollers "busybox0" resuming is not supported
I0814 06:40:06.127] error: replicationcontrollers "busybox1" resuming is not supported
I0814 06:40:06.127] has:Object 'Kind' is missing
I0814 06:40:06.128] Successful
I0814 06:40:06.129] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:40:06.129] error: replicationcontrollers "busybox0" resuming is not supported
I0814 06:40:06.129] error: replicationcontrollers "busybox1" resuming is not supported
I0814 06:40:06.130] has:replicationcontrollers "busybox0" resuming is not supported
I0814 06:40:06.131] Successful
I0814 06:40:06.132] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:40:06.132] error: replicationcontrollers "busybox0" resuming is not supported
I0814 06:40:06.132] error: replicationcontrollers "busybox1" resuming is not supported
I0814 06:40:06.132] has:replicationcontrollers "busybox0" resuming is not supported
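(Rollback, pause, and resume are implemented per kind, and ReplicationController implements none of them, so each recursive invocation reports the per-object "not supported" errors alongside the usual decode failure. For example:)

kubectl rollout pause -f hack/testdata/recursive/rc --recursive
# error: replicationcontrollers "busybox0" pausing is not supported
# error: replicationcontrollers "busybox1" pausing is not supported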
I0814 06:40:06.209] replicationcontroller "busybox0" force deleted
I0814 06:40:06.215] replicationcontroller "busybox1" force deleted
W0814 06:40:06.316] I0814 06:40:05.718928   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565764796-498", Name:"busybox0", UID:"4077ad49-3407-43de-b2d0-dc6ad2c5f0cd", APIVersion:"v1", ResourceVersion:"1072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-pwqs8
W0814 06:40:06.316] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 06:40:06.317] I0814 06:40:05.727434   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565764796-498", Name:"busybox1", UID:"1d5b6d80-1ae9-4412-bccc-2e34d747fdc1", APIVersion:"v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-nxc6h
W0814 06:40:06.317] E0814 06:40:05.861992   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:06.317] E0814 06:40:05.961007   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:06.317] E0814 06:40:06.100343   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:06.318] E0814 06:40:06.187220   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:06.318] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 06:40:06.318] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0814 06:40:06.864] E0814 06:40:06.863701   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:06.963] E0814 06:40:06.962656   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:07.102] E0814 06:40:07.101955   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:07.189] E0814 06:40:07.188890   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:07.290] Recording: run_namespace_tests
I0814 06:40:07.290] Running command: run_namespace_tests
I0814 06:40:07.290] 
I0814 06:40:07.290] +++ Running case: test-cmd.run_namespace_tests 
I0814 06:40:07.290] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:40:07.290] +++ command: run_namespace_tests
I0814 06:40:07.291] +++ [0814 06:40:07] Testing kubectl(v1:namespaces)
I0814 06:40:07.340] namespace/my-namespace created
I0814 06:40:07.441] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 06:40:07.515] namespace "my-namespace" deleted
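(The namespace lifecycle tested here is the plain create/get/delete sequence:)

kubectl create namespace my-namespace
kubectl get namespaces/my-namespace -o go-template='{{.metadata.name}}'   # my-namespace
kubectl delete namespace my-namespace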
W0814 06:40:07.866] E0814 06:40:07.865379   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 18 lines ...
W0814 06:40:12.197] E0814 06:40:12.196892   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:12.608] namespace/my-namespace condition met
I0814 06:40:12.704] Successful
I0814 06:40:12.704] message:Error from server (NotFound): namespaces "my-namespace" not found
I0814 06:40:12.704] has: not found
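("condition met" followed by a NotFound is the signature of waiting for the deletion to finish before asserting absence. A sketch, assuming kubectl wait produced it (the timeout value is made up):)

kubectl wait --for=delete ns/my-namespace --timeout=60s   # namespace/my-namespace condition met
kubectl get ns/my-namespace                               # Error from server (NotFound)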
I0814 06:40:12.782] namespace/my-namespace created
I0814 06:40:12.885] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 06:40:13.096] Successful
I0814 06:40:13.097] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 06:40:13.097] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0814 06:40:13.100] namespace "namespace-1565764754-11066" deleted
I0814 06:40:13.100] namespace "namespace-1565764755-5968" deleted
I0814 06:40:13.100] namespace "namespace-1565764757-1387" deleted
I0814 06:40:13.100] namespace "namespace-1565764759-25700" deleted
I0814 06:40:13.101] namespace "namespace-1565764796-29297" deleted
I0814 06:40:13.101] namespace "namespace-1565764796-498" deleted
I0814 06:40:13.101] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 06:40:13.101] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 06:40:13.101] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 06:40:13.101] has:warning: deleting cluster-scoped resources
I0814 06:40:13.101] Successful
I0814 06:40:13.102] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 06:40:13.102] namespace "kube-node-lease" deleted
I0814 06:40:13.102] namespace "my-namespace" deleted
I0814 06:40:13.102] namespace "namespace-1565764662-31034" deleted
... skipping 27 lines ...
I0814 06:40:13.106] namespace "namespace-1565764754-11066" deleted
I0814 06:40:13.106] namespace "namespace-1565764755-5968" deleted
I0814 06:40:13.106] namespace "namespace-1565764757-1387" deleted
I0814 06:40:13.106] namespace "namespace-1565764759-25700" deleted
I0814 06:40:13.106] namespace "namespace-1565764796-29297" deleted
I0814 06:40:13.106] namespace "namespace-1565764796-498" deleted
I0814 06:40:13.106] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 06:40:13.106] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 06:40:13.107] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 06:40:13.107] has:namespace "my-namespace" deleted
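(Deleting all namespaces sweeps up every test namespace but fails with Forbidden on the protected system namespaces, which is exactly what the assertions check. Roughly, assuming no extra flags:)

kubectl delete namespaces --all
# warning: deleting cluster-scoped resources, not scoped to the provided namespace
# per-namespace "deleted" lines follow, then Forbidden for default,
# kube-public, and kube-system, which may not be deleted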
W0814 06:40:13.207] E0814 06:40:12.873776   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:13.208] E0814 06:40:12.972229   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:13.208] E0814 06:40:13.112303   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:13.208] E0814 06:40:13.198425   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:13.309] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0814 06:40:13.309] namespace/other created
I0814 06:40:13.394] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0814 06:40:13.489] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:40:13.663] (Bpod/valid-pod created
I0814 06:40:13.772] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 06:40:13.868] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 06:40:13.948] Successful
I0814 06:40:13.948] message:error: a resource cannot be retrieved by name across all namespaces
I0814 06:40:13.949] has:a resource cannot be retrieved by name across all namespaces
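(Pod names are only unique within a namespace, so fetching one by name across all namespaces is rejected; the fix is to scope the query:)

kubectl get pod valid-pod --all-namespaces   # error: a resource cannot be retrieved by name across all namespaces
kubectl get pod valid-pod -n other           # works: the name resolves inside one namespace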
I0814 06:40:14.047] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 06:40:14.133] pod "valid-pod" force deleted
I0814 06:40:14.234] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:40:14.315] namespace "other" deleted
W0814 06:40:14.416] E0814 06:40:13.875287   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:14.416] E0814 06:40:13.974094   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:14.416] E0814 06:40:14.113838   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:14.416] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 06:40:14.417] I0814 06:40:14.168501   53182 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0814 06:40:14.417] E0814 06:40:14.199838   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:14.417] I0814 06:40:14.268856   53182 controller_utils.go:1036] Caches are synced for resource quota controller
W0814 06:40:14.581] I0814 06:40:14.581039   53182 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0814 06:40:14.682] I0814 06:40:14.681387   53182 controller_utils.go:1036] Caches are synced for garbage collector controller
W0814 06:40:14.877] E0814 06:40:14.876812   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:14.976] E0814 06:40:14.975747   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:15.116] E0814 06:40:15.115420   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:15.202] E0814 06:40:15.201447   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:15.879] E0814 06:40:15.878546   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:15.978] E0814 06:40:15.977391   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:16.117] E0814 06:40:16.116788   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:16.203] E0814 06:40:16.202896   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:16.423] I0814 06:40:16.423238   53182 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1565764796-498
W0814 06:40:16.427] I0814 06:40:16.426910   53182 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1565764796-498
W0814 06:40:16.881] E0814 06:40:16.880502   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 10 lines ...
W0814 06:40:19.225] E0814 06:40:19.224511   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:19.431] +++ exit code: 0
I0814 06:40:19.466] Recording: run_secrets_test
I0814 06:40:19.467] Running command: run_secrets_test
I0814 06:40:19.488] 
I0814 06:40:19.491] +++ Running case: test-cmd.run_secrets_test 
I0814 06:40:19.494] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 58 lines ...
I0814 06:40:21.528] secret "test-secret" deleted
I0814 06:40:21.615] secret/test-secret created
I0814 06:40:21.710] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 06:40:21.806] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0814 06:40:21.884] secret "test-secret" deleted
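(The kubernetes.io/tls type assertion corresponds to a TLS secret created from a certificate/key pair; the file paths below are placeholders:)

kubectl create secret tls test-secret --namespace=test-secrets \
  --cert=hack/testdata/tls.crt --key=hack/testdata/tls.key
kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'   # kubernetes.io/tls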
W0814 06:40:21.985] I0814 06:40:19.734696   70156 loader.go:375] Config loaded from file:  /tmp/tmp.yQUpioMIk3/.kube/config
W0814 06:40:21.986] E0814 06:40:19.886312   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:21.986] E0814 06:40:19.983699   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:21.986] E0814 06:40:20.124533   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:21.986] E0814 06:40:20.226300   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:21.986] E0814 06:40:20.888339   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:21.987] E0814 06:40:20.985213   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:21.987] E0814 06:40:21.127972   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:21.987] E0814 06:40:21.227966   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:21.987] E0814 06:40:21.889744   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:21.988] E0814 06:40:21.986980   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:22.088] secret/secret-string-data created
I0814 06:40:22.164] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 06:40:22.260] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 06:40:22.355] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0814 06:40:22.436] secret "secret-string-data" deleted
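(The two assertions above pin down stringData semantics: djE= and djI= are base64 for v1 and v2, and stringData itself is write-only, merged into .data on write and never stored, hence the <no value> result. A sketch of a manifest that reproduces this:)

cat <<'EOF' | kubectl create --namespace=test-secrets -f -
apiVersion: v1
kind: Secret
metadata:
  name: secret-string-data
data:
  k1: djE=   # base64("v1")
  k2: djI=   # base64("v2")
stringData:
  k1: v1
  k2: v2
EOF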
I0814 06:40:22.530] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:40:22.689] secret "test-secret" deleted
I0814 06:40:22.775] namespace "test-secrets" deleted
W0814 06:40:22.876] E0814 06:40:22.129828   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 22 lines ...
W0814 06:40:28.000] E0814 06:40:27.999289   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:28.101] +++ exit code: 0
I0814 06:40:28.101] Recording: run_configmap_tests
I0814 06:40:28.101] Running command: run_configmap_tests
I0814 06:40:28.102] 
I0814 06:40:28.102] +++ Running case: test-cmd.run_configmap_tests 
I0814 06:40:28.102] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:40:28.103] +++ command: run_configmap_tests
I0814 06:40:28.103] +++ [0814 06:40:28] Creating namespace namespace-1565764828-12007
I0814 06:40:28.184] namespace/namespace-1565764828-12007 created
I0814 06:40:28.276] Context "test" modified.
I0814 06:40:28.287] +++ [0814 06:40:28] Testing configmaps
W0814 06:40:28.388] E0814 06:40:28.141649   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:28.389] E0814 06:40:28.241447   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:28.550] configmap/test-configmap created
I0814 06:40:28.693] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0814 06:40:28.801] configmap "test-configmap" deleted
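(The configmap smoke test is the same create/assert/delete cycle; the literal key/value is a placeholder:)

kubectl create configmap test-configmap --from-literal=key1=value1
kubectl get configmap/test-configmap -o go-template='{{.metadata.name}}'   # test-configmap
kubectl delete configmap test-configmap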
W0814 06:40:28.903] E0814 06:40:28.902901   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:29.002] E0814 06:40:29.001260   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:29.103] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0814 06:40:29.103] namespace/test-configmaps created
I0814 06:40:29.193] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
I0814 06:40:29.329] core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
I0814 06:40:29.463] core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
I0814 06:40:29.575] configmap/test-configmap created
W0814 06:40:29.676] E0814 06:40:29.143998   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:29.677] E0814 06:40:29.244390   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:29.778] configmap/test-binary-configmap created
I0814 06:40:29.834] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0814 06:40:29.954] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0814 06:40:30.220] configmap "test-configmap" deleted
I0814 06:40:30.311] configmap "test-binary-configmap" deleted
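(A binary configmap is created from a file whose contents are not valid UTF-8 and land in .binaryData rather than .data; the source path below is a placeholder:)

kubectl create configmap test-binary-configmap --namespace=test-configmaps \
  --from-file=bin_data=/path/to/some-binary-file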
I0814 06:40:30.400] namespace "test-configmaps" deleted
W0814 06:40:30.500] E0814 06:40:29.904406   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:30.501] E0814 06:40:30.003349   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:30.501] E0814 06:40:30.145728   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:30.502] E0814 06:40:30.246210   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:30.907] E0814 06:40:30.906179   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 19 lines ...
I0814 06:40:35.517] +++ exit code: 0
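A minimal sketch, assuming standard kubectl flags, of the flow run_configmap_tests exercises above; the resource and namespace names come from the log, while data flags such as --from-literal are assumptions, not visible in this output:
  kubectl create namespace test-configmaps
  kubectl create configmap test-configmap --namespace=test-configmaps        # data flags assumed
  kubectl get configmap/test-configmap --namespace=test-configmaps -o go-template='{{.metadata.name}}'
  kubectl delete configmap test-configmap --namespace=test-configmaps
  kubectl delete namespace test-configmaps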
I0814 06:40:35.557] Recording: run_client_config_tests
I0814 06:40:35.558] Running command: run_client_config_tests
I0814 06:40:35.581] 
I0814 06:40:35.583] +++ Running case: test-cmd.run_client_config_tests 
I0814 06:40:35.586] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:40:35.588] +++ command: run_client_config_tests
I0814 06:40:35.604] +++ [0814 06:40:35] Creating namespace namespace-1565764835-6378
I0814 06:40:35.678] namespace/namespace-1565764835-6378 created
I0814 06:40:35.751] Context "test" modified.
I0814 06:40:35.761] +++ [0814 06:40:35] Testing client config
I0814 06:40:35.839] Successful
I0814 06:40:35.840] message:error: stat missing: no such file or directory
I0814 06:40:35.840] has:missing: no such file or directory
I0814 06:40:35.916] Successful
I0814 06:40:35.917] message:error: stat missing: no such file or directory
I0814 06:40:35.917] has:missing: no such file or directory
I0814 06:40:35.990] Successful
I0814 06:40:35.990] message:error: stat missing: no such file or directory
I0814 06:40:35.990] has:missing: no such file or directory
I0814 06:40:36.069] Successful
I0814 06:40:36.069] message:Error in configuration: context was not found for specified context: missing-context
I0814 06:40:36.069] has:context was not found for specified context: missing-context
I0814 06:40:36.140] Successful
I0814 06:40:36.140] message:error: no server found for cluster "missing-cluster"
I0814 06:40:36.140] has:no server found for cluster "missing-cluster"
I0814 06:40:36.209] Successful
I0814 06:40:36.210] message:error: auth info "missing-user" does not exist
I0814 06:40:36.210] has:auth info "missing-user" does not exist
W0814 06:40:36.311] E0814 06:40:35.914873   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:36.312] E0814 06:40:36.014476   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:36.312] E0814 06:40:36.156692   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:36.312] E0814 06:40:36.257711   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:36.413] Successful
I0814 06:40:36.414] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0814 06:40:36.414] has:error loading config file
I0814 06:40:36.429] Successful
I0814 06:40:36.430] message:error: stat missing-config: no such file or directory
I0814 06:40:36.430] has:no such file or directory
I0814 06:40:36.444] +++ exit code: 0
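The client-config failures above can be reproduced with kubectl's global flags; a minimal sketch, assuming any read-only subcommand such as get pods:
  kubectl get pods --kubeconfig=missing              # error: stat missing: no such file or directory
  kubectl get pods --context=missing-context         # context was not found for specified context
  kubectl get pods --cluster=missing-cluster         # no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user               # auth info "missing-user" does not exist
  kubectl get pods --kubeconfig=/tmp/newconfig.yaml  # fails when the file declares an unregistered Config version (v-1)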
I0814 06:40:36.483] Recording: run_service_accounts_tests
I0814 06:40:36.483] Running command: run_service_accounts_tests
I0814 06:40:36.506] 
I0814 06:40:36.509] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0814 06:40:36.859] namespace/test-service-accounts created
I0814 06:40:36.959] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0814 06:40:37.031] serviceaccount/test-service-account created
I0814 06:40:37.126] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0814 06:40:37.205] serviceaccount "test-service-account" deleted
I0814 06:40:37.290] namespace "test-service-accounts" deleted
W0814 06:40:37.391] E0814 06:40:36.916694   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:37.392] E0814 06:40:37.015794   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:37.392] E0814 06:40:37.158415   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:37.392] E0814 06:40:37.259251   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:37.919] E0814 06:40:37.918436   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 19 lines ...
I0814 06:40:42.419] +++ exit code: 0
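A minimal sketch of the serviceaccount round-trip run_service_accounts_tests performs above; names are from the log, and the exact core.sh invocations are assumed rather than quoted:
  kubectl create namespace test-service-accounts
  kubectl create serviceaccount test-service-account --namespace=test-service-accounts
  kubectl get serviceaccount/test-service-account --namespace=test-service-accounts -o go-template='{{.metadata.name}}'
  kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
  kubectl delete namespace test-service-accounts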
I0814 06:40:42.460] Recording: run_job_tests
I0814 06:40:42.460] Running command: run_job_tests
I0814 06:40:42.484] 
I0814 06:40:42.486] +++ Running case: test-cmd.run_job_tests 
I0814 06:40:42.489] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0814 06:40:43.278] Labels:                        run=pi
I0814 06:40:43.278] Annotations:                   <none>
I0814 06:40:43.278] Schedule:                      59 23 31 2 *
I0814 06:40:43.278] Concurrency Policy:            Allow
I0814 06:40:43.279] Suspend:                       False
I0814 06:40:43.279] Successful Job History Limit:  3
I0814 06:40:43.279] Failed Job History Limit:      1
I0814 06:40:43.279] Starting Deadline Seconds:     <unset>
I0814 06:40:43.280] Selector:                      <unset>
I0814 06:40:43.280] Parallelism:                   <unset>
I0814 06:40:43.280] Completions:                   <unset>
I0814 06:40:43.280] Pod Template:
I0814 06:40:43.281]   Labels:  run=pi
... skipping 32 lines ...
I0814 06:40:43.827]                 run=pi
I0814 06:40:43.827] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0814 06:40:43.827] Controlled By:  CronJob/pi
I0814 06:40:43.827] Parallelism:    1
I0814 06:40:43.827] Completions:    1
I0814 06:40:43.827] Start Time:     Wed, 14 Aug 2019 06:40:43 +0000
I0814 06:40:43.828] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0814 06:40:43.828] Pod Template:
I0814 06:40:43.828]   Labels:  controller-uid=551e9f0d-2c93-45ab-8720-08039bf01f06
I0814 06:40:43.828]            job-name=test-job
I0814 06:40:43.828]            run=pi
I0814 06:40:43.828]   Containers:
I0814 06:40:43.828]    pi:
... skipping 15 lines ...
I0814 06:40:43.830]   Type    Reason            Age   From            Message
I0814 06:40:43.831]   ----    ------            ----  ----            -------
I0814 06:40:43.831]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-pxskz
I0814 06:40:43.911] job.batch "test-job" deleted
I0814 06:40:43.991] cronjob.batch "pi" deleted
I0814 06:40:44.072] namespace "test-jobs" deleted
W0814 06:40:44.173] E0814 06:40:42.925614   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:44.173] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 06:40:44.173] E0814 06:40:43.025581   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:44.173] E0814 06:40:43.171153   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:44.174] E0814 06:40:43.269655   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:44.174] I0814 06:40:43.555185   53182 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"551e9f0d-2c93-45ab-8720-08039bf01f06", APIVersion:"batch/v1", ResourceVersion:"1353", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pxskz
W0814 06:40:44.174] E0814 06:40:43.926598   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:44.175] E0814 06:40:44.027332   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:44.175] E0814 06:40:44.174042   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:44.271] E0814 06:40:44.271278   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:44.928] E0814 06:40:44.928091   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 19 lines ...
I0814 06:40:49.380] +++ exit code: 0
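A rough reconstruction of the cronjob/job flow above; the schedule, the resource names, and the manual-instantiate annotation match the log, while the image and container arguments are assumptions:
  kubectl run pi --generator=cronjob/v1beta1 --schedule='59 23 31 2 *' \
      --restart=OnFailure --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(20)'   # deprecated generator, per the warning above; image and args assumed
  kubectl create job test-job --from=cronjob/pi   # sets annotation cronjob.kubernetes.io/instantiate: manual
  kubectl describe job test-job
  kubectl delete job test-job
  kubectl delete cronjob pi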
I0814 06:40:49.380] Recording: run_create_job_tests
I0814 06:40:49.380] Running command: run_create_job_tests
I0814 06:40:49.380] 
I0814 06:40:49.380] +++ Running case: test-cmd.run_create_job_tests 
I0814 06:40:49.380] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 29 lines ...
I0814 06:40:50.871] podtemplate/nginx created
I0814 06:40:50.971] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 06:40:51.038] NAME    CONTAINERS   IMAGES   POD LABELS
I0814 06:40:51.038] nginx   nginx        nginx    name=nginx
W0814 06:40:51.139] I0814 06:40:49.494440   53182 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565764849-14902", Name:"test-job", UID:"1944f9df-d63d-42a1-91d0-2245d021fda6", APIVersion:"batch/v1", ResourceVersion:"1371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-np6mb
W0814 06:40:51.140] I0814 06:40:49.748664   53182 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565764849-14902", Name:"test-job-pi", UID:"42886a5d-0cdb-4853-ae4d-6cc58f7a8960", APIVersion:"batch/v1", ResourceVersion:"1378", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-ktptf
W0814 06:40:51.140] E0814 06:40:49.935846   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:51.140] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 06:40:51.141] E0814 06:40:50.036664   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:51.141] I0814 06:40:50.107124   53182 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565764849-14902", Name:"my-pi", UID:"e5b46a74-8da0-46f2-871b-5a204c772dd4", APIVersion:"batch/v1", ResourceVersion:"1387", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-8s7s4
W0814 06:40:51.142] E0814 06:40:50.184225   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:51.142] E0814 06:40:50.279633   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:51.142] I0814 06:40:50.868368   49700 controller.go:606] quota admission added evaluator for: podtemplates
W0814 06:40:51.143] E0814 06:40:50.937334   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:51.143] E0814 06:40:51.037950   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:51.186] E0814 06:40:51.185680   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:51.282] E0814 06:40:51.281359   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:51.383] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 06:40:51.383] podtemplate "nginx" deleted
I0814 06:40:51.392] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:40:51.408] +++ exit code: 0
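A sketch of the podtemplate round-trip shown above, assuming the template is created from a manifest file; the file core.sh actually uses is not visible in this log:
  kubectl create -f podtemplate.yaml   # filename assumed; yields podtemplate/nginx created
  kubectl get podtemplates -o go-template='{{range.items}}{{.metadata.name}}:{{end}}'
  kubectl get podtemplates             # table output: NAME / CONTAINERS / IMAGES / POD LABELS
  kubectl delete podtemplate nginx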
I0814 06:40:51.445] Recording: run_service_tests
I0814 06:40:51.446] Running command: run_service_tests
... skipping 65 lines ...
I0814 06:40:52.297] Port:              <unset>  6379/TCP
I0814 06:40:52.297] TargetPort:        6379/TCP
I0814 06:40:52.297] Endpoints:         <none>
I0814 06:40:52.297] Session Affinity:  None
I0814 06:40:52.297] Events:            <none>
I0814 06:40:52.297] 
W0814 06:40:52.398] E0814 06:40:51.938727   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:52.398] E0814 06:40:52.039581   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:52.398] E0814 06:40:52.186886   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:52.399] E0814 06:40:52.282868   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:52.499] Successful describe services:
I0814 06:40:52.500] Name:              kubernetes
I0814 06:40:52.500] Namespace:         default
I0814 06:40:52.500] Labels:            component=apiserver
I0814 06:40:52.500]                    provider=kubernetes
I0814 06:40:52.500] Annotations:       <none>
... skipping 177 lines ...
I0814 06:40:52.971]     role: padawan
I0814 06:40:52.971]   sessionAffinity: None
I0814 06:40:52.971]   type: ClusterIP
I0814 06:40:52.971] status:
I0814 06:40:52.971]   loadBalancer: {}
I0814 06:40:53.046] service/redis-master selector updated
W0814 06:40:53.146] E0814 06:40:52.940243   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:53.147] E0814 06:40:53.040821   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:53.189] E0814 06:40:53.188610   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:53.285] E0814 06:40:53.284402   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:53.385] core.sh:890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
I0814 06:40:53.386] service/redis-master selector updated
I0814 06:40:53.386] core.sh:894: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0814 06:40:53.405] apiVersion: v1
I0814 06:40:53.405] kind: Service
I0814 06:40:53.405] metadata:
... skipping 49 lines ...
I0814 06:40:53.409]   selector:
I0814 06:40:53.409]     role: padawan
I0814 06:40:53.409]   sessionAffinity: None
I0814 06:40:53.409]   type: ClusterIP
I0814 06:40:53.409] status:
I0814 06:40:53.410]   loadBalancer: {}
W0814 06:40:53.510] error: you must specify resources by --filename when --local is set.
W0814 06:40:53.510] Example resource specifications include:
W0814 06:40:53.510]    '-f rsrc.yaml'
W0814 06:40:53.510]    '--filename=rsrc.json'
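The error above is kubectl refusing --local without an input object; given -f, set selector renders the change client-side instead. A sketch, assuming a local manifest for the redis-master service:
  kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml   # client-side only; filename assumed
  kubectl set selector service redis-master role=padawan                           # server-side update, as in the log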
I0814 06:40:53.611] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0814 06:40:53.734] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 06:40:53.815] service "redis-master" deleted
... skipping 2 lines ...
I0814 06:40:54.161] service/redis-master created
I0814 06:40:54.261] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 06:40:54.364] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 06:40:54.523] service/service-v1-test created
I0814 06:40:54.624] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0814 06:40:54.790] service/service-v1-test replaced
W0814 06:40:54.891] E0814 06:40:53.941950   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 5 lines ...
I0814 06:40:55.145] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0814 06:40:55.146] service "redis-master" deleted
I0814 06:40:55.146] service "service-v1-test" deleted
I0814 06:40:55.169] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 06:40:55.261] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 06:40:55.424] service/redis-master created
W0814 06:40:55.525] E0814 06:40:55.191741   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:55.526] E0814 06:40:55.287304   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:55.626] service/redis-slave created
I0814 06:40:55.702] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0814 06:40:55.787] Successful
I0814 06:40:55.788] message:NAME           RSRC
I0814 06:40:55.788] kubernetes     144
I0814 06:40:55.788] redis-master   1420
... skipping 29 lines ...
I0814 06:40:57.508] +++ [0814 06:40:57] Creating namespace namespace-1565764857-14236
I0814 06:40:57.586] namespace/namespace-1565764857-14236 created
I0814 06:40:57.659] Context "test" modified.
I0814 06:40:57.667] +++ [0814 06:40:57] Testing kubectl(v1:daemonsets)
I0814 06:40:57.763] apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:40:57.936] daemonset.apps/bind created
W0814 06:40:58.037] E0814 06:40:55.945409   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:58.038] E0814 06:40:56.045993   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:58.038] E0814 06:40:56.193462   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:58.038] E0814 06:40:56.288831   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:58.038] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 06:40:58.039] I0814 06:40:56.809433   53182 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"fecbd82c-f049-4fa2-9f90-77fc1508b5e8", APIVersion:"apps/v1", ResourceVersion:"1437", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-6cdd84c77d to 2
W0814 06:40:58.039] I0814 06:40:56.816223   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"1b73bbc2-8949-45d6-aec4-93e467d38925", APIVersion:"apps/v1", ResourceVersion:"1438", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-nz6sj
W0814 06:40:58.039] I0814 06:40:56.820086   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"1b73bbc2-8949-45d6-aec4-93e467d38925", APIVersion:"apps/v1", ResourceVersion:"1438", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-gdhbn
W0814 06:40:58.040] E0814 06:40:56.946946   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:58.040] E0814 06:40:57.047485   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:58.040] E0814 06:40:57.195131   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:58.040] E0814 06:40:57.290300   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:58.040] I0814 06:40:57.933585   49700 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0814 06:40:58.041] I0814 06:40:57.944639   49700 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0814 06:40:58.041] E0814 06:40:57.947923   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:40:58.049] E0814 06:40:58.049116   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:40:58.150] apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
I0814 06:40:58.209] daemonset.apps/bind configured
I0814 06:40:58.310] apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
I0814 06:40:58.401] daemonset.apps/bind image updated
I0814 06:40:58.498] apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
I0814 06:40:58.585] daemonset.apps/bind env updated
... skipping 40 lines ...
I0814 06:41:00.739]   Volumes:	<none>
I0814 06:41:00.739]  (dry run)
I0814 06:41:00.830] apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0814 06:41:00.922] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0814 06:41:01.017] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0814 06:41:01.129] daemonset.apps/bind rolled back
W0814 06:41:01.229] E0814 06:40:58.196635   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 13 lines ...
I0814 06:41:01.398] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0814 06:41:01.398] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0814 06:41:01.441] Successful
I0814 06:41:01.441] message:error: unable to find specified revision 1000000 in history
I0814 06:41:01.441] has:unable to find specified revision
I0814 06:41:01.536] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0814 06:41:01.630] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0814 06:41:01.733] daemonset.apps/bind rolled back
I0814 06:41:01.829] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0814 06:41:01.924] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 13 lines ...
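A sketch of the daemonset rollback sequence exercised above; the resource name and the bad revision number are from the log, while the exact form of the image update is assumed:
  kubectl set image daemonset/bind '*=k8s.gcr.io/pause:latest'   # assumed form of the update that bumps .metadata.generation
  kubectl rollout undo daemonset/bind                            # daemonset.apps/bind rolled back
  kubectl rollout undo daemonset/bind --to-revision=1000000      # error: unable to find specified revision 1000000 in history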
I0814 06:41:02.448] core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:41:02.609] replicationcontroller/frontend created
I0814 06:41:02.690] replicationcontroller "frontend" deleted
I0814 06:41:02.797] core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:41:02.895] core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:41:03.060] replicationcontroller/frontend created
W0814 06:41:03.160] E0814 06:41:01.954769   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:41:03.161] E0814 06:41:02.055467   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:41:03.161] E0814 06:41:02.203587   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:41:03.161] E0814 06:41:02.298468   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:41:03.162] I0814 06:41:02.616486   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565764862-265", Name:"frontend", UID:"b0736515-29b2-46f8-85c4-54abb58e62c6", APIVersion:"v1", ResourceVersion:"1514", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kcmjd
W0814 06:41:03.162] I0814 06:41:02.619568   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565764862-265", Name:"frontend", UID:"b0736515-29b2-46f8-85c4-54abb58e62c6", APIVersion:"v1", ResourceVersion:"1514", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r4zmc
W0814 06:41:03.162] I0814 06:41:02.620283   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565764862-265", Name:"frontend", UID:"b0736515-29b2-46f8-85c4-54abb58e62c6", APIVersion:"v1", ResourceVersion:"1514", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vl2br
W0814 06:41:03.163] E0814 06:41:02.956382   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:41:03.163] E0814 06:41:03.056777   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:41:03.163] I0814 06:41:03.065233   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565764862-265", Name:"frontend", UID:"436051e6-5327-4036-b225-6e621a80ae30", APIVersion:"v1", ResourceVersion:"1530", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-n6qrj
W0814 06:41:03.163] I0814 06:41:03.068739   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565764862-265", Name:"frontend", UID:"436051e6-5327-4036-b225-6e621a80ae30", APIVersion:"v1", ResourceVersion:"1530", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-x9hmh
W0814 06:41:03.164] I0814 06:41:03.071718   53182 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565764862-265", Name:"frontend", UID:"436051e6-5327-4036-b225-6e621a80ae30", APIVersion:"v1", ResourceVersion:"1530", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2762c
W0814 06:41:03.206] E0814 06:41:03.205283   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:41:03.300] E0814 06:41:03.299790   53182 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:41:03.401] core.sh:1059: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0814 06:41:03.401] core.sh:1061: Successful describe rc frontend:
I0814 06:41:03.401] Name:         frontend
I0814 06:41:03.401] Namespace:    namespace-1565764862-265
I0814 06:41:03.401] Selector:     app=guestbook,tier=frontend
I0814 06:41:03.402] Labels:       app=guestbook
I0814 06:41:03.402]               tier=frontend
I0814 06:41:03.402] Annotations:  <none>
I0814 06:41:03.402] Replicas:     3 current / 3 desired
I0814 06:41:03.402] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0814 06:41:03.402] Pod Template:
I0814 06:41:03.403]   Labels:  app=guestbook
I0814 06:41:03.403]            tier=frontend
I0814 06:41:03.403]   Containers:
I0814 06:41:03.403]    php-redis:
I0814 06:41:03.403]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0814 06:41:03.462] Namespace:    namespace-1565764862-265
I0814 06:41:03.462] Selector:     app=guestbook,tier=frontend
I0814 06:41:03.463] Labels:       app=guestbook
I0814 06:41:03.463]               tier=frontend
I0814 06:41:03.463] Annotations:  <none>
I0814 06:41:03.463] Replicas:     3 current / 3 desired
I0814 06:41:03.463] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0814 06:41:03.464] Pod Template:
I0814 06:41:03.464]   Labels:  app=guestbook
I0814 06:41:03.464]            tier=frontend
I0814 06:41:03.464]   Containers:
I0814 06:41:03.464]    php-redis:
I0814 06:41:03.464]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0814 06:41:03.579] Namespace:    namespace-1565764862-265
I0814 06:41:03.580] Selector:     app=guestbook,tier=frontend
I0814 06:41:03.580] Labels:       app=guestbook
I0814 06:41:03.580]               tier=frontend
I0814 06:41:03.580] Annotations:  <none>
I0814 06:41:03.580] Replicas:     3 current / 3 desired
I0814 06:41:03.580] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0814 06:41:03.580] Pod Template:
I0814 06:41:03.580]   Labels:  app=guestbook
I0814 06:41:03.580]            tier=frontend
I0814 06:41:03.580]   Containers:
I0814 06:41:03.580]    php-redis:
I0814 06:41:03.581]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0814 06:41:03.699] Namespace:    namespace-1565764862-265
I0814 06:41:03.699] Selector:     app=guestbook,tier=frontend
I0814 06:41:03.699] Labels:       app=guestbook
I0814 06:41:03.699]               tier=frontend
I0814 06:41:03.699] Annotations:  <none>
I0814 06:41:03.700] Replicas:     3 current / 3 desired
I0814 06:41:03.700] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0814 06:41:03.700] Pod Template:
I0814 06:41:03.700]   Labels:  app=guestbook
I0814 06:41:03.700]            tier=frontend
I0814 06:41:03.700]   Containers:
I0814 06:41:03.700]    php-redis:
I0814 06:41:03.700]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0814 06:41:03.859] Namespace:    namespace-1565764862-265
I0814 06:41:03.859] Selector:     app=guestbook,tier=frontend
I0814 06:41:03.859] Labels:       app=guestbook
I0814 06:41:03.859]               tier=frontend
I0814 06:41:03.859] Annotations:  <none>
I0814 06:41:03.859] Replicas:     3 current / 3 desired
I0814 06:41:03.859] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0814 06:41:03.859] Pod Template:
I0814 06:41:03.859]   Labels:  app=guestbook
I0814 06:41:03.860]            tier=frontend
I0814 06:41:03.860]   Containers:
I0814 06:41:03.860]    php-redis:
I0814 06:41:03.860]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0814 06:41:03.975] Namespace:    namespace-1565764862-265
I0814 06:41:03.975] Selector:     app=guestbook,tier=frontend
I0814 06:41:03.975] Labels:       app=guestbook
I0814 06:41:03.975]               tier=frontend
I0814 06:41:03.976] Annotations:  <none>
I0814 06:41:03.976] Replicas:     3 current / 3 desired
I0814 06:41:03.976] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0814 06:41:03.976] Pod Template:
I0814 06:41:03.976]   Labels:  app=guestbook
I0814 06:41:03.976]            tier=frontend
I0814 06:41:03.976]   Containers:
I0814 06:41:03.976]    php-redis:
I0814 06:41:03.977]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0814 06:41:04.083] Namespace:    namespace-1565764862-265
I0814 06:41:04.083] Selector:     app=guestbook,tier=frontend
I0814 06:41:04.083] Labels:       app=guestbook
I0814 06:41:04.084]               tier=frontend
I0814 06:41:04.084] Annotations:  <none>
I0814 06:41:04.084] Replicas:     3 current / 3 desired
I0814 06:41:04.084] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0814 06:41:04.084] Pod Template:
I0814 06:41:04.084]   Labels:  app=guestbook
I0814 06:41:04.084]            tier=frontend
I0814 06:41:04.084]   Containers:
I0814 06:41:04.084]    php-redis:
I0814 06:41:04.085]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
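The repeated blocks above are kubectl describe rc frontend rendered through the several describe code paths core.sh checks; a minimal sketch of the basic invocations (the manifest path and the flag variant are assumptions):
  kubectl create -f frontend-controller.yaml   # path assumed; yields replicationcontroller/frontend
  kubectl describe rc frontend
  kubectl describe rc frontend --show-events=false   # assumed variant that omits the Events section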