PR: ehashman: Drop deprecated cadvisor metric labels
Result: FAILURE
Tests: 1 failed / 2470 succeeded
Started: 2019-08-14 10:31
Elapsed: 30m42s
Revision:
Builder: gke-prow-ssd-pool-1a225945-l1d7
Refs: master:34791349, 80376:e0b66c79
pod: 86ff9eca-be7e-11e9-ac8f-6e56e203dc81
infra-commit: 381773791
repo: k8s.io/kubernetes
repo-commit: eeec166cfb48cc8a46efa5c91f2b93d4007e7098
repos: {u'k8s.io/kubernetes': u'master:34791349d656a9f8e45b7093012e29ad08782ffa,80376:e0b66c792b8daabc8f2dcc5209356bca2cb2b197'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptWithPermitPlugin 1m4s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptWithPermitPlugin$
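
A hedged local-reproduction note, not part of the job output: the integration test expects an etcd endpoint at http://127.0.0.1:2379 (see the storagebackend ServerList entries in the log below), so a sketch along these lines, run from a kubernetes/kubernetes checkout with etcd installed, may be needed before the go test command above works locally.

# Assumption: a local etcd binary is available; --listen-client-urls and
# --advertise-client-urls are standard etcd flags, not taken from this job.
etcd --listen-client-urls http://127.0.0.1:2379 --advertise-client-urls http://127.0.0.1:2379 &
# then re-run the failing test with the go test invocation shown above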
=== RUN   TestPreemptWithPermitPlugin
I0814 10:57:25.419406  110577 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 10:57:25.419444  110577 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 10:57:25.419472  110577 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 10:57:25.419484  110577 master.go:234] Using reconciler: 
I0814 10:57:25.421265  110577 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.421487  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.421506  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.421607  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.421836  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.422466  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.422686  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.423054  110577 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 10:57:25.423200  110577 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 10:57:25.423545  110577 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.423794  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.423810  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.423889  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.424004  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.424609  110577 watch_cache.go:405] Replace watchCache (rev: 29469) 
I0814 10:57:25.425054  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.425750  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.426075  110577 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 10:57:25.426172  110577 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 10:57:25.426243  110577 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.426796  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.426898  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.427035  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.428107  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.429197  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.429668  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.429887  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.429966  110577 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 10:57:25.429996  110577 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 10:57:25.430500  110577 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.430732  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.431252  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.431510  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.431346  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.432024  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.432815  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.432874  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.432959  110577 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 10:57:25.433051  110577 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 10:57:25.434250  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.434361  110577 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.434468  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.434480  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.434514  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.434591  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.434902  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.434949  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.435737  110577 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 10:57:25.435769  110577 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 10:57:25.435920  110577 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.435994  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.436004  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.436038  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.436088  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.436429  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.436470  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.436557  110577 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 10:57:25.436637  110577 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 10:57:25.436695  110577 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.436766  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.436777  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.436801  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.437024  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.437167  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.437781  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.437858  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.437914  110577 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 10:57:25.437968  110577 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 10:57:25.438074  110577 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.438140  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.438150  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.438181  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.438224  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.438330  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.438455  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.438563  110577 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 10:57:25.438587  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.438615  110577 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 10:57:25.438706  110577 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.438765  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.438789  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.438817  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.438860  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.439131  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.439205  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.439238  110577 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 10:57:25.439355  110577 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 10:57:25.439411  110577 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.439510  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.439521  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.439616  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.439699  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.439747  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.440171  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.440274  110577 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 10:57:25.440435  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.440439  110577 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.440504  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.440511  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.440521  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.440632  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.440652  110577 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 10:57:25.440778  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.440862  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.441121  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.441162  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.441293  110577 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 10:57:25.441477  110577 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.441575  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.441585  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.441618  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.441673  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.441847  110577 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 10:57:25.441947  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.442073  110577 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 10:57:25.442208  110577 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.442270  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.442280  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.442320  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.442374  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.442425  110577 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 10:57:25.442608  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.442612  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.442795  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.442899  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.442988  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.443011  110577 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 10:57:25.443069  110577 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 10:57:25.443146  110577 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.443217  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.443227  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.443286  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.443386  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.443425  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.443722  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.443786  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.443815  110577 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 10:57:25.443840  110577 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.443884  110577 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 10:57:25.443937  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.443945  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.443971  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.444025  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.445181  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.445352  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.445414  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.445467  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.445476  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.445505  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.445571  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.445838  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.446086  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.446409  110577 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.446495  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.446507  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.446556  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.446617  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.446891  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.446970  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.447050  110577 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 10:57:25.447103  110577 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 10:57:25.447738  110577 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.447921  110577 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.448606  110577 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.449251  110577 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.449853  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.449862  110577 watch_cache.go:405] Replace watchCache (rev: 29470) 
I0814 10:57:25.450142  110577 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.450963  110577 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.451405  110577 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.451520  110577 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.451707  110577 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.452657  110577 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.453482  110577 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.453717  110577 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.454851  110577 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.455136  110577 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.455639  110577 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.455831  110577 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.456409  110577 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.456593  110577 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.457220  110577 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.457428  110577 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.457625  110577 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.457757  110577 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.457912  110577 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.458594  110577 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.458824  110577 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.460198  110577 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.461186  110577 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.461814  110577 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.462206  110577 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.463467  110577 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.463885  110577 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.465661  110577 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.468140  110577 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.468958  110577 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.469910  110577 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.470318  110577 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.470564  110577 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 10:57:25.470674  110577 master.go:434] Enabling API group "authentication.k8s.io".
I0814 10:57:25.470759  110577 master.go:434] Enabling API group "authorization.k8s.io".
I0814 10:57:25.471028  110577 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.471255  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.471343  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.471514  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.471692  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.472633  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.472798  110577 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 10:57:25.472890  110577 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 10:57:25.472985  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.472972  110577 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.473223  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.473306  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.473422  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.473620  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.474808  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.474943  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.474817  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.475018  110577 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 10:57:25.475052  110577 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 10:57:25.475456  110577 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.475680  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.475767  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.475873  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.475995  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.476372  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.476515  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.476918  110577 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 10:57:25.476961  110577 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 10:57:25.477035  110577 master.go:434] Enabling API group "autoscaling".
I0814 10:57:25.477331  110577 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.477591  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.477681  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.477852  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.478023  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.478450  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.478679  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.478799  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.479180  110577 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 10:57:25.477891  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.479262  110577 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 10:57:25.479562  110577 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.480162  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.480253  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.480665  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.480784  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.481072  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.481489  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.481654  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.481880  110577 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 10:57:25.481995  110577 master.go:434] Enabling API group "batch".
I0814 10:57:25.482052  110577 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 10:57:25.482984  110577 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.483029  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.483197  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.483289  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.483333  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.483470  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.483984  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.484176  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.484394  110577 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 10:57:25.484515  110577 master.go:434] Enabling API group "certificates.k8s.io".
I0814 10:57:25.484815  110577 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.485025  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.485138  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.485277  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.485421  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.486030  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.486294  110577 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 10:57:25.486465  110577 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.486645  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.486661  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.486787  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.486878  110577 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 10:57:25.486568  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.487376  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.487920  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.488309  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.488680  110577 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 10:57:25.489210  110577 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 10:57:25.484876  110577 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 10:57:25.489357  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.489406  110577 master.go:434] Enabling API group "coordination.k8s.io".
I0814 10:57:25.490009  110577 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.490154  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.490244  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.490280  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.490375  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.490685  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.491263  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.491715  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.491826  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.492104  110577 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 10:57:25.492158  110577 master.go:434] Enabling API group "extensions".
I0814 10:57:25.492197  110577 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 10:57:25.492320  110577 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.492600  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.492631  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.492681  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.492799  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.493108  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.493221  110577 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 10:57:25.493580  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.493664  110577 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 10:57:25.493589  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.493611  110577 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.494412  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.494518  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.494685  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.494828  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.495030  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.495587  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.495844  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.496067  110577 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 10:57:25.496242  110577 master.go:434] Enabling API group "networking.k8s.io".
I0814 10:57:25.496379  110577 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.496195  110577 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 10:57:25.496687  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.496812  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.497010  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.497178  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.497269  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.498340  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.498495  110577 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 10:57:25.498630  110577 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 10:57:25.498636  110577 master.go:434] Enabling API group "node.k8s.io".
I0814 10:57:25.498513  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.499064  110577 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.499257  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.499347  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.499449  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.499627  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.500095  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.500230  110577 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 10:57:25.500337  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.500391  110577 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.500603  110577 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 10:57:25.500667  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.500748  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.500787  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.500819  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.500835  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.501709  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.501782  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.501836  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.502147  110577 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 10:57:25.502167  110577 master.go:434] Enabling API group "policy".
I0814 10:57:25.502199  110577 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.502253  110577 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 10:57:25.502262  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.502390  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.502445  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.502493  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.502809  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.502914  110577 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 10:57:25.503112  110577 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.503204  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.503287  110577 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 10:57:25.503515  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.503632  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.504038  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.504124  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.504990  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
I0814 10:57:25.504990  110577 watch_cache.go:405] Replace watchCache (rev: 29471) 
E0814 10:57:25.505002  110577 factory.go:599] Error getting pod permit-plugin8df58a24-47f8-48b8-9c52-d83f032416cd/test-pod for retry: Get http://127.0.0.1:36377/api/v1/namespaces/permit-plugin8df58a24-47f8-48b8-9c52-d83f032416cd/pods/test-pod: dial tcp 127.0.0.1:36377: connect: connection refused; retrying...
I0814 10:57:25.506788  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.506893  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.507026  110577 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 10:57:25.507078  110577 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.507275  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.507390  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.507501  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.507201  110577 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 10:57:25.507791  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.508072  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.508185  110577 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 10:57:25.508373  110577 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.508448  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.508458  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.508498  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.508571  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.508612  110577 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 10:57:25.508826  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.509211  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.509434  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.509650  110577 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 10:57:25.509809  110577 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.510101  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.510221  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.510308  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.510484  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.509970  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.510734  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.509702  110577 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 10:57:25.510987  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.511100  110577 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 10:57:25.511379  110577 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.511471  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.511487  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.511518  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.511581  110577 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 10:57:25.511771  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.511589  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.512224  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.512317  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.512491  110577 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 10:57:25.512522  110577 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 10:57:25.512553  110577 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.512618  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.512627  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.512656  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.512716  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.513390  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.513421  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.513519  110577 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 10:57:25.513652  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.513699  110577 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 10:57:25.513932  110577 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.514016  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.514060  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.514100  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.514165  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.515805  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.515824  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.515965  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.516085  110577 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 10:57:25.516117  110577 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0814 10:57:25.516174  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.516255  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.516340  110577 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 10:57:25.517368  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.518591  110577 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.518701  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.518714  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.518759  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.518919  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.519775  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.519911  110577 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 10:57:25.519880  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.520050  110577 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 10:57:25.520321  110577 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.520443  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.520453  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.521211  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.521512  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.521610  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.522043  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.522285  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.522577  110577 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 10:57:25.522747  110577 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 10:57:25.523213  110577 master.go:423] Skipping disabled API group "settings.k8s.io".
I0814 10:57:25.522777  110577 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 10:57:25.524087  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.524328  110577 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.524390  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.524407  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.524430  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.524471  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.524831  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.524947  110577 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 10:57:25.525153  110577 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.525230  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.525241  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.525250  110577 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 10:57:25.525278  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.525344  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.525391  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.525612  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.525765  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.526219  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.526393  110577 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 10:57:25.526062  110577 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 10:57:25.526913  110577 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.527185  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.527195  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.527282  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.527344  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.528352  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.528448  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.528604  110577 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 10:57:25.528606  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.528637  110577 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 10:57:25.528646  110577 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.528713  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.528723  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.528821  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.528984  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.529278  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.529375  110577 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 10:57:25.529524  110577 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.529586  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.529622  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.529632  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.529662  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.529715  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.530015  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.530148  110577 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 10:57:25.530228  110577 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 10:57:25.530254  110577 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.530301  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.530310  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.530331  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.530378  110577 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 10:57:25.530394  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.530512  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.531268  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.531416  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.531507  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.531656  110577 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 10:57:25.531714  110577 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 10:57:25.531716  110577 master.go:434] Enabling API group "storage.k8s.io".
I0814 10:57:25.532018  110577 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.532099  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.532109  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.532141  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.532207  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.532523  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.533034  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.533448  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.533636  110577 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 10:57:25.533762  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.533894  110577 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 10:57:25.534672  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.534677  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.535604  110577 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.535706  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.535717  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.535749  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.535796  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.536085  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.536149  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.536228  110577 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 10:57:25.536277  110577 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 10:57:25.536372  110577 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.536455  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.536466  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.536495  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.536574  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.536977  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.537012  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.537142  110577 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 10:57:25.537204  110577 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 10:57:25.537297  110577 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.537370  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.537381  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.537419  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.537472  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.537655  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.537745  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.537852  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.537853  110577 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 10:57:25.537874  110577 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 10:57:25.538036  110577 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.538106  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.538115  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.538145  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.538199  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.539216  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.539291  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.539329  110577 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 10:57:25.539350  110577 master.go:434] Enabling API group "apps".
I0814 10:57:25.539369  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.539382  110577 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.539458  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.539469  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.539517  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.539520  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.539609  110577 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 10:57:25.539615  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.539863  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.539974  110577 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 10:57:25.540011  110577 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.540074  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.540083  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.540112  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.540154  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.540184  110577 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 10:57:25.540385  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.540802  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.540872  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.540903  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.540914  110577 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 10:57:25.540939  110577 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.540969  110577 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 10:57:25.540986  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.540993  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.541037  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.541120  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.541421  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.541505  110577 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 10:57:25.541544  110577 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.541599  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.541606  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.541627  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.541657  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.541680  110577 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 10:57:25.541876  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.542144  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.542218  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.542237  110577 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 10:57:25.542256  110577 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 10:57:25.542286  110577 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.542324  110577 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 10:57:25.542478  110577 client.go:354] parsed scheme: ""
I0814 10:57:25.542489  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:25.542517  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:25.542591  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.544077  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.544164  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:25.544281  110577 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 10:57:25.544303  110577 master.go:434] Enabling API group "events.k8s.io".
I0814 10:57:25.544342  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.544667  110577 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.544821  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.544940  110577 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.545169  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:25.545233  110577 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 10:57:25.545341  110577 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.545480  110577 watch_cache.go:405] Replace watchCache (rev: 29472) 
I0814 10:57:25.545497  110577 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.545664  110577 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.545788  110577 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.545984  110577 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.546109  110577 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.546219  110577 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.546317  110577 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.547056  110577 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.547233  110577 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.547867  110577 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.547936  110577 watch_cache.go:405] Replace watchCache (rev: 29473) 
I0814 10:57:25.548079  110577 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.548779  110577 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.549043  110577 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.549611  110577 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.549774  110577 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.550544  110577 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.550768  110577 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 10:57:25.550876  110577 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0814 10:57:25.551418  110577 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.551513  110577 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.551763  110577 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.552313  110577 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.553104  110577 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.553932  110577 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.554157  110577 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.554861  110577 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.555502  110577 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.555846  110577 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.556382  110577 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 10:57:25.556474  110577 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 10:57:25.557181  110577 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.557368  110577 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.557769  110577 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.558405  110577 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.558936  110577 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.559686  110577 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.560459  110577 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.560900  110577 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.561256  110577 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.561752  110577 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.562287  110577 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 10:57:25.562356  110577 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 10:57:25.562882  110577 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.563550  110577 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 10:57:25.563600  110577 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 10:57:25.564008  110577 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.564463  110577 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.564709  110577 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.565142  110577 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.565501  110577 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.565852  110577 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.566259  110577 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 10:57:25.566320  110577 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 10:57:25.566929  110577 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.567523  110577 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.567762  110577 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.568365  110577 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.568704  110577 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.569064  110577 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.569774  110577 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.570178  110577 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.570621  110577 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.571427  110577 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.571763  110577 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.572104  110577 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 10:57:25.572248  110577 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 10:57:25.572321  110577 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0814 10:57:25.572966  110577 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.573567  110577 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.574156  110577 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.574749  110577 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.575554  110577 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7fe2b4a8-1453-420f-9a14-754514060671", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 10:57:25.577963  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:25.577992  110577 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 10:57:25.578004  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:25.578015  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:25.578024  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:25.578066  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:25.578099  110577 httplog.go:90] GET /healthz: (233.71µs) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:25.579434  110577 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.587806ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:25.582168  110577 httplog.go:90] GET /api/v1/services: (1.097348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:25.588659  110577 httplog.go:90] GET /api/v1/services: (1.214704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:25.591291  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:25.591323  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:25.591337  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:25.591347  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:25.591357  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:25.591383  110577 httplog.go:90] GET /healthz: (193.047µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:25.592329  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.357958ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:25.593489  110577 httplog.go:90] GET /api/v1/services: (828.46µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:25.593725  110577 httplog.go:90] GET /api/v1/services: (960.638µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:25.594907  110577 httplog.go:90] POST /api/v1/namespaces: (1.871892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37462]
I0814 10:57:25.596396  110577 httplog.go:90] GET /api/v1/namespaces/kube-public: (794.03µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:25.598116  110577 httplog.go:90] POST /api/v1/namespaces: (1.274457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:25.599248  110577 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (716.588µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:25.600790  110577 httplog.go:90] POST /api/v1/namespaces: (1.294198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:25.679273  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:25.679679  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:25.679807  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:25.679898  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:25.679978  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:25.680034  110577 httplog.go:90] GET /healthz: (999.215µs) 0 [Go-http-client/1.1 127.0.0.1:37460]
I0814 10:57:25.692152  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:25.692188  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:25.692201  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:25.692211  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:25.692219  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:25.692254  110577 httplog.go:90] GET /healthz: (258.734µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:25.779130  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:25.779175  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:25.779189  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:25.779199  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:25.779208  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:25.779242  110577 httplog.go:90] GET /healthz: (276.571µs) 0 [Go-http-client/1.1 127.0.0.1:37460]
I0814 10:57:25.792214  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:25.792258  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:25.792272  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:25.792283  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:25.792292  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:25.792355  110577 httplog.go:90] GET /healthz: (282.722µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:25.879087  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:25.879124  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:25.879137  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:25.879146  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:25.879153  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:25.879210  110577 httplog.go:90] GET /healthz: (257.702µs) 0 [Go-http-client/1.1 127.0.0.1:37460]
I0814 10:57:25.892195  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:25.892267  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:25.892282  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:25.892294  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:25.892304  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:25.892342  110577 httplog.go:90] GET /healthz: (305.765µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:25.979128  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:25.979193  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:25.979207  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:25.979217  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:25.979225  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:25.979258  110577 httplog.go:90] GET /healthz: (314.009µs) 0 [Go-http-client/1.1 127.0.0.1:37460]
I0814 10:57:25.992262  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:25.992312  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:25.992326  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:25.992336  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:25.992345  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:25.992391  110577 httplog.go:90] GET /healthz: (280.807µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:26.079211  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:26.079276  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.079291  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:26.079303  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:26.079312  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:26.079355  110577 httplog.go:90] GET /healthz: (339.954µs) 0 [Go-http-client/1.1 127.0.0.1:37460]
I0814 10:57:26.092221  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:26.092261  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.092275  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:26.092285  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:26.092293  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:26.092324  110577 httplog.go:90] GET /healthz: (271.841µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:26.179172  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:26.179215  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.179227  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:26.179240  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:26.179260  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:26.179294  110577 httplog.go:90] GET /healthz: (319.782µs) 0 [Go-http-client/1.1 127.0.0.1:37460]
I0814 10:57:26.192231  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:26.192287  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.192300  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:26.192310  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:26.192318  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:26.192350  110577 httplog.go:90] GET /healthz: (272.546µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:26.279055  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:26.279092  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.279105  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:26.279127  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:26.279135  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:26.279169  110577 httplog.go:90] GET /healthz: (295.85µs) 0 [Go-http-client/1.1 127.0.0.1:37460]
I0814 10:57:26.292190  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:26.292235  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.292248  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:26.292258  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:26.292266  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:26.292302  110577 httplog.go:90] GET /healthz: (263.359µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:26.379136  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:26.379171  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.379182  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:26.379188  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:26.379194  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:26.379223  110577 httplog.go:90] GET /healthz: (267.415µs) 0 [Go-http-client/1.1 127.0.0.1:37460]
I0814 10:57:26.392067  110577 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 10:57:26.392101  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.392111  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:26.392117  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:26.392123  110577 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:26.392148  110577 httplog.go:90] GET /healthz: (197.043µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:26.419051  110577 client.go:354] parsed scheme: ""
I0814 10:57:26.419087  110577 client.go:354] scheme "" not registered, fallback to default scheme
I0814 10:57:26.419134  110577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 10:57:26.419192  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:26.419673  110577 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 10:57:26.419707  110577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 10:57:26.480735  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.480769  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:26.480780  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:26.480788  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:26.480826  110577 httplog.go:90] GET /healthz: (1.205826ms) 0 [Go-http-client/1.1 127.0.0.1:37460]
I0814 10:57:26.493067  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.493301  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:26.493382  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:26.493448  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:26.493661  110577 httplog.go:90] GET /healthz: (1.674141ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:26.580959  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.580991  110577 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 10:57:26.581002  110577 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 10:57:26.581010  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 10:57:26.581045  110577 httplog.go:90] GET /healthz: (2.247455ms) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:26.581076  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.233046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:26.581435  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.121672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.581614  110577 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.091595ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37476]
I0814 10:57:26.583633  110577 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.404989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37476]
I0814 10:57:26.583791  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.353017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:26.584040  110577 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 10:57:26.584209  110577 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.958862ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.585461  110577 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.272799ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.586221  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.932903ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:26.587492  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (885.714µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.589143  110577 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (3.535865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.590457  110577 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.769728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37460]
I0814 10:57:26.590492  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (2.642041ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.590783  110577 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 10:57:26.590812  110577 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0814 10:57:26.591515  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (704.655µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.592910  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (856.484µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.593036  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.593062  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:26.593086  110577 httplog.go:90] GET /healthz: (1.297857ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.594246  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (889.411µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.595472  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (866.699µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.596602  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (711.285µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.598797  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.859056ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.598984  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0814 10:57:26.600086  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (950.702µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.602211  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.802581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.602465  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 10:57:26.603601  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (807.723µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.605393  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.332718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.605755  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 10:57:26.606936  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (887.586µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.608944  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.500408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.609112  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 10:57:26.610303  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (850.57µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.612349  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.72876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.612792  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 10:57:26.613990  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (894.743µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.616224  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.670332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.616513  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 10:57:26.618169  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.343192ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.622962  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.273247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.630670  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 10:57:26.632181  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.104315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.641500  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.145337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.641857  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 10:57:26.643229  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.183563ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.645798  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.151977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.646088  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 10:57:26.648423  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (793.767µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.651078  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.131248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.651339  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 10:57:26.652967  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.482158ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.655139  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.865759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.655380  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 10:57:26.656606  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.084623ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.658961  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.933094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.659206  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 10:57:26.660513  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.172875ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.662910  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.040726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.663108  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 10:57:26.664332  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.09221ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.666398  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.590145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.666603  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 10:57:26.668014  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.25768ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.670378  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.057129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.670707  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 10:57:26.672325  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.347079ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.674232  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.580639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.674588  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 10:57:26.676379  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.652932ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.678888  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.137517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.679079  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 10:57:26.680364  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.16346ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.682610  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.919923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.683028  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 10:57:26.683944  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.683977  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:26.684013  110577 httplog.go:90] GET /healthz: (1.071315ms) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:26.684375  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.201894ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.686596  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.886245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.686794  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 10:57:26.687859  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (886.623µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.689822  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.5709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.690148  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 10:57:26.691144  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (853.293µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.693008  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.693034  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:26.693065  110577 httplog.go:90] GET /healthz: (1.246525ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.693853  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.372144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.694176  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 10:57:26.695239  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (901.941µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.697390  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.518937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.698055  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 10:57:26.699017  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (776.102µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.700795  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.36431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.701094  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 10:57:26.702094  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (840.444µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.704316  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.655946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.704661  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 10:57:26.705955  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (970.339µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.708020  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.773141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.708236  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0814 10:57:26.709414  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.026996ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.711541  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.722604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.711784  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 10:57:26.713480  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.55479ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.715575  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.759209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.715775  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 10:57:26.716957  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.034991ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.719183  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.897895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.719571  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 10:57:26.720778  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (951.649µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.723322  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.804461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.723638  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 10:57:26.724817  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (941.321µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.726849  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.698196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.727040  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 10:57:26.729023  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.832484ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.731917  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.556645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.732252  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 10:57:26.733795  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.387484ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.735950  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.785538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.736137  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 10:57:26.737356  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.09939ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.742020  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.169954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.742232  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 10:57:26.743917  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.521683ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.746328  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.700582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.746522  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 10:57:26.747729  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (991.2µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.750116  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.930384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.750511  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 10:57:26.752286  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.230704ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.755053  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.116044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.755237  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 10:57:26.756486  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.100357ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.759848  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.855574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.760208  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 10:57:26.761876  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.388132ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.764258  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.968649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.764469  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 10:57:26.765869  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.221845ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.768055  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.691663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.768258  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 10:57:26.769667  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.262904ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.771837  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.809193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.772074  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 10:57:26.773430  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.119635ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.775662  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.633617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.775919  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 10:57:26.777379  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.259065ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.783034  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.783065  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:26.783104  110577 httplog.go:90] GET /healthz: (4.398043ms) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:26.783343  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.403665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.783886  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 10:57:26.785131  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.021847ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.792170  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.342706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.792420  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 10:57:26.794104  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.794138  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:26.794178  110577 httplog.go:90] GET /healthz: (1.939469ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.794242  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.62832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.800916  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.132903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.801312  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 10:57:26.803598  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.670407ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.806191  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.014399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.806570  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 10:57:26.807895  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (927.812µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.812688  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.356399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.813223  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 10:57:26.814564  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (875.389µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.817024  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.123542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.817370  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 10:57:26.818879  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.084313ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.821064  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.804215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.821237  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 10:57:26.822790  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.428744ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.826189  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.069513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.826433  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 10:57:26.832237  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (5.631758ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.835281  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.269767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.835493  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 10:57:26.837086  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.399568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.839670  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.983238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.839949  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 10:57:26.841262  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.071832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.843794  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.962224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.843978  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 10:57:26.859690  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.756258ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:26.888347  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.888383  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:26.888404  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.487094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.888428  110577 httplog.go:90] GET /healthz: (4.662208ms) 0 [Go-http-client/1.1 127.0.0.1:37478]
I0814 10:57:26.888716  110577 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 10:57:26.893033  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.893067  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:26.893111  110577 httplog.go:90] GET /healthz: (1.170725ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.902760  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.436059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.920781  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.842402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.921069  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 10:57:26.939726  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.502831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.960765  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.82857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.961059  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 10:57:26.979780  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.766825ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:26.980500  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.980551  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:26.980590  110577 httplog.go:90] GET /healthz: (1.134946ms) 0 [Go-http-client/1.1 127.0.0.1:37478]
I0814 10:57:26.993281  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:26.993321  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:26.993368  110577 httplog.go:90] GET /healthz: (1.361399ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.000420  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.519499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.000690  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 10:57:27.019234  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.261669ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.040171  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.266978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.040630  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 10:57:27.059499  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.607319ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.081017  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.964482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.081246  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 10:57:27.082109  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.082134  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.082164  110577 httplog.go:90] GET /healthz: (1.955831ms) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:27.093109  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.093139  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.093189  110577 httplog.go:90] GET /healthz: (1.179122ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.099174  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.263896ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.120674  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.614052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.120928  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 10:57:27.139360  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.396943ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.160338  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.384559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.160916  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 10:57:27.184750  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.184781  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.184830  110577 httplog.go:90] GET /healthz: (5.993674ms) 0 [Go-http-client/1.1 127.0.0.1:37478]
I0814 10:57:27.184875  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (6.967132ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.198653  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.198683  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.198718  110577 httplog.go:90] GET /healthz: (1.578568ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.204670  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.872109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.204938  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 10:57:27.223316  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (4.448832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.240889  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.891148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.241202  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0814 10:57:27.259586  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.643474ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.279748  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.793604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.279792  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.279814  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.279850  110577 httplog.go:90] GET /healthz: (680.524µs) 0 [Go-http-client/1.1 127.0.0.1:37478]
I0814 10:57:27.280003  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 10:57:27.293224  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.293265  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.293322  110577 httplog.go:90] GET /healthz: (1.305762ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.299458  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.496289ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.320949  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.953426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.321215  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 10:57:27.339699  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.810664ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.360504  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.541315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.360936  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 10:57:27.379450  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.553892ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.379585  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.379611  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.379644  110577 httplog.go:90] GET /healthz: (916.531µs) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:27.393306  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.393345  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.393398  110577 httplog.go:90] GET /healthz: (1.323294ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.400512  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.625646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.400883  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 10:57:27.419596  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.571354ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.440687  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.730754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.440964  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 10:57:27.459623  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.571763ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.479920  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.917113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.480038  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.480064  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.480098  110577 httplog.go:90] GET /healthz: (1.046937ms) 0 [Go-http-client/1.1 127.0.0.1:37478]
I0814 10:57:27.480114  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 10:57:27.493337  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.493375  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.493432  110577 httplog.go:90] GET /healthz: (1.42343ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.499200  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.327361ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.520298  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.36535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.520785  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 10:57:27.539480  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.564551ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.560786  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.587146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.561039  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 10:57:27.579446  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.49203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.581237  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.581267  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.581304  110577 httplog.go:90] GET /healthz: (1.987247ms) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:27.593282  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.593320  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.593363  110577 httplog.go:90] GET /healthz: (1.334502ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.600015  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.209677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.600265  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 10:57:27.619448  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.432046ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.640259  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.279659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.640584  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 10:57:27.659606  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.603631ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.679866  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.679893  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.679930  110577 httplog.go:90] GET /healthz: (940.945µs) 0 [Go-http-client/1.1 127.0.0.1:37478]
I0814 10:57:27.680448  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.534773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.680982  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 10:57:27.694367  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.694400  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.694462  110577 httplog.go:90] GET /healthz: (1.092861ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.698893  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.070479ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.720206  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.109196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.720670  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 10:57:27.739185  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.313601ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.760244  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.303929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.760515  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 10:57:27.779304  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.372541ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.779696  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.779726  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.779760  110577 httplog.go:90] GET /healthz: (906.211µs) 0 [Go-http-client/1.1 127.0.0.1:37478]
I0814 10:57:27.793254  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.793292  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.793383  110577 httplog.go:90] GET /healthz: (1.358426ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.800259  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.301187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.800467  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 10:57:27.819302  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.396116ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.840185  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.247657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.840661  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 10:57:27.859464  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.482969ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.881242  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.881278  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.881319  110577 httplog.go:90] GET /healthz: (2.210092ms) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:27.881920  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.727693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.882240  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 10:57:27.893256  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.893295  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.893337  110577 httplog.go:90] GET /healthz: (1.285962ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.899082  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.215572ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.921937  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.841169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.922305  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 10:57:27.939561  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.556363ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.960476  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.499845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.960819  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 10:57:27.979333  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.456117ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:27.980688  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.980731  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.980786  110577 httplog.go:90] GET /healthz: (1.97006ms) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:27.993090  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:27.993128  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:27.993174  110577 httplog.go:90] GET /healthz: (1.132833ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.999635  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.778988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:27.999837  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 10:57:28.019692  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.605961ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.040475  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.437329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.040773  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 10:57:28.059426  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.412529ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.079883  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.079915  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.079957  110577 httplog.go:90] GET /healthz: (928.142µs) 0 [Go-http-client/1.1 127.0.0.1:37478]
I0814 10:57:28.081166  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.200095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.081407  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 10:57:28.093149  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.093193  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.093245  110577 httplog.go:90] GET /healthz: (1.244438ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.099120  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.297551ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.120763  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.808589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.120999  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 10:57:28.139406  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.498997ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.160594  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.532172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.160869  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 10:57:28.179171  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.213406ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.179638  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.179669  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.179703  110577 httplog.go:90] GET /healthz: (911.773µs) 0 [Go-http-client/1.1 127.0.0.1:37478]
I0814 10:57:28.193616  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.193675  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.193725  110577 httplog.go:90] GET /healthz: (1.670697ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.200335  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.492446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.200611  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 10:57:28.219854  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.917652ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.240334  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.27199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.240682  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 10:57:28.259322  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.380263ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.280919  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.280956  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.281005  110577 httplog.go:90] GET /healthz: (1.266968ms) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:28.281601  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.248577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.281837  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 10:57:28.292804  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.292838  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.292901  110577 httplog.go:90] GET /healthz: (871.06µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.299382  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.488487ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.320719  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.666027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.321238  110577 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 10:57:28.339732  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.733582ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.342059  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.82179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.360218  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.19271ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.360629  110577 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 10:57:28.379306  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.279857ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.379767  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.379794  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.379828  110577 httplog.go:90] GET /healthz: (1.09377ms) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:28.382570  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.739716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.392841  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.392879  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.392925  110577 httplog.go:90] GET /healthz: (955.576µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.400062  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.133436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.400943  110577 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 10:57:28.422253  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (4.332999ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.425245  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.388304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.442905  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (5.010874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.443243  110577 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 10:57:28.459087  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.123637ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.461132  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.60954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.480116  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.480152  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.480190  110577 httplog.go:90] GET /healthz: (1.399079ms) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:28.480374  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.219654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.480631  110577 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 10:57:28.493150  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.493212  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.493267  110577 httplog.go:90] GET /healthz: (1.266547ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.499063  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.211976ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.500883  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.444747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.520486  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.475297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.520966  110577 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 10:57:28.539326  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.381649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.541117  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.211938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.560288  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.372214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.560563  110577 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 10:57:28.579875  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.764954ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.580025  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.580054  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.580091  110577 httplog.go:90] GET /healthz: (1.264883ms) 0 [Go-http-client/1.1 127.0.0.1:37458]
I0814 10:57:28.582055  110577 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.830972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.593057  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.593225  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.593291  110577 httplog.go:90] GET /healthz: (1.327012ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.599820  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.851303ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.600097  110577 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 10:57:28.619786  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.728179ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.622422  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.028234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.645281  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (7.317028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.645753  110577 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 10:57:28.659332  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.322025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.661631  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.660155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.680052  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.680088  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.680133  110577 httplog.go:90] GET /healthz: (1.30076ms) 0 [Go-http-client/1.1 127.0.0.1:37478]
I0814 10:57:28.682452  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.351159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.682891  110577 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 10:57:28.693355  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.693687  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.693858  110577 httplog.go:90] GET /healthz: (1.755163ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.699516  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.580669ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.701488  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.304525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.720568  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.462873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.720856  110577 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 10:57:28.739573  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.539703ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.741873  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.667976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.760420  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.377501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.760951  110577 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 10:57:28.779495  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.459999ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.779726  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.779752  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.779790  110577 httplog.go:90] GET /healthz: (1.005068ms) 0 [Go-http-client/1.1 127.0.0.1:37478]
I0814 10:57:28.781648  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.633974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.793209  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.793469  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.793702  110577 httplog.go:90] GET /healthz: (1.644548ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.800888  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.937798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.801270  110577 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 10:57:28.820040  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.785386ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.822392  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.62109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.840437  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.498869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.840742  110577 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 10:57:28.859508  110577 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.475252ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.861450  110577 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.369509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.879985  110577 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 10:57:28.880019  110577 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 10:57:28.880057  110577 httplog.go:90] GET /healthz: (1.237498ms) 0 [Go-http-client/1.1 127.0.0.1:37478]
I0814 10:57:28.880379  110577 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.351666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.880620  110577 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 10:57:28.893317  110577 httplog.go:90] GET /healthz: (1.218376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.895120  110577 httplog.go:90] GET /api/v1/namespaces/default: (1.29339ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.897517  110577 httplog.go:90] POST /api/v1/namespaces: (1.691932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.898690  110577 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (841.845µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.902471  110577 httplog.go:90] POST /api/v1/namespaces/default/services: (3.419016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.904277  110577 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.030519ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.906441  110577 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.593839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.980276  110577 httplog.go:90] GET /healthz: (1.365322ms) 200 [Go-http-client/1.1 127.0.0.1:37458]
W0814 10:57:28.982293  110577 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 10:57:28.982383  110577 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 10:57:28.982450  110577 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 10:57:28.982473  110577 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 10:57:28.982499  110577 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 10:57:28.982510  110577 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 10:57:28.982522  110577 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 10:57:28.982572  110577 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 10:57:28.983583  110577 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 10:57:28.983714  110577 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 10:57:28.983734  110577 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 10:57:28.983770  110577 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0814 10:57:28.983782  110577 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
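
The predicate and priority maps listed above drive a two-phase decision: nodes are first filtered by the fit predicates, and the surviving nodes are then ranked by the priority functions. A minimal, self-contained sketch of that filter-then-score flow, using placeholder pod and node types rather than the scheduler's real data structures:

// Illustrative filter-then-score loop; not the scheduler's actual API.
package main

import "fmt"

type node struct {
	name       string
	freeCPU    int
	freeMemory int
}

type pod struct {
	name   string
	cpu    int
	memory int
}

// fit predicate: every predicate must pass for a node to stay feasible.
func fits(p pod, n node) bool {
	return n.freeCPU >= p.cpu && n.freeMemory >= p.memory
}

// priority function: higher score for nodes with more headroom left.
func score(p pod, n node) int {
	return (n.freeCPU - p.cpu) + (n.freeMemory - p.memory)
}

func schedule(p pod, nodes []node) (string, bool) {
	best, bestScore, found := "", -1, false
	for _, n := range nodes {
		if !fits(p, n) {
			continue // filtered out by a predicate
		}
		if s := score(p, n); s > bestScore {
			best, bestScore, found = n.name, s, true
		}
	}
	return best, found
}

func main() {
	nodes := []node{{"test-node-0", 500, 500}, {"test-node-1", 100, 100}}
	name, ok := schedule(pod{"waiting-pod", 200, 200}, nodes)
	fmt.Println(name, ok)
}
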
I0814 10:57:28.984329  110577 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.984360  110577 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.984389  110577 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.984410  110577 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.984756  110577 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.984778  110577 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.984916  110577 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.984955  110577 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.985217  110577 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.985239  110577 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.985469  110577 reflector.go:122] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.985490  110577 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.985583  110577 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.985606  110577 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.985937  110577 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.985958  110577 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.986199  110577 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.986239  110577 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.986287  110577 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.986307  110577 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.986836  110577 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.986860  110577 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0814 10:57:28.990169  110577 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (683.678µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.990255  110577 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (772.658µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.990462  110577 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (718.421µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37656]
I0814 10:57:28.991189  110577 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (415.289µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:57:28.991419  110577 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=29472 labels= fields= timeout=6m19s
I0814 10:57:28.991429  110577 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (897.003µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37658]
I0814 10:57:28.991945  110577 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (445.57µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37660]
I0814 10:57:28.992067  110577 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (401.461µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
I0814 10:57:28.992279  110577 get.go:250] Starting watch for /api/v1/pods, rv=29470 labels= fields= timeout=8m12s
I0814 10:57:28.992464  110577 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (389.176µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37664]
I0814 10:57:28.993070  110577 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (476.11µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37666]
I0814 10:57:28.993115  110577 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=29472 labels= fields= timeout=6m10s
I0814 10:57:28.993192  110577 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (417.161µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37668]
I0814 10:57:28.993444  110577 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=29470 labels= fields= timeout=5m23s
I0814 10:57:28.993755  110577 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (456.981µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37670]
I0814 10:57:28.994113  110577 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=29472 labels= fields= timeout=5m25s
I0814 10:57:28.994199  110577 get.go:250] Starting watch for /api/v1/nodes, rv=29470 labels= fields= timeout=8m9s
I0814 10:57:28.994387  110577 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=29471 labels= fields= timeout=7m46s
I0814 10:57:28.994482  110577 get.go:250] Starting watch for /api/v1/services, rv=29682 labels= fields= timeout=5m24s
I0814 10:57:28.994830  110577 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=29472 labels= fields= timeout=9m19s
I0814 10:57:28.994950  110577 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=29470 labels= fields= timeout=5m13s
I0814 10:57:28.994959  110577 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=29470 labels= fields= timeout=9m38s
I0814 10:57:29.084280  110577 shared_informer.go:211] caches populated
I0814 10:57:29.184539  110577 shared_informer.go:211] caches populated
I0814 10:57:29.284765  110577 shared_informer.go:211] caches populated
I0814 10:57:29.384988  110577 shared_informer.go:211] caches populated
I0814 10:57:29.485200  110577 shared_informer.go:211] caches populated
I0814 10:57:29.585426  110577 shared_informer.go:211] caches populated
I0814 10:57:29.685635  110577 shared_informer.go:211] caches populated
I0814 10:57:29.785847  110577 shared_informer.go:211] caches populated
I0814 10:57:29.886346  110577 shared_informer.go:211] caches populated
I0814 10:57:29.986587  110577 shared_informer.go:211] caches populated
I0814 10:57:29.991975  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:29.992616  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:29.992849  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:29.992875  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:29.992979  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:29.993936  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:29.994254  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:30.086760  110577 shared_informer.go:211] caches populated
I0814 10:57:30.186979  110577 shared_informer.go:211] caches populated
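
The "caches populated" lines report that each shared informer has delivered its initial LIST, so scheduling can start against a complete view of cluster state. A minimal sketch of the wait-until-synced pattern, with stub HasSynced-style checks standing in for real informers rather than the client-go implementation:

// Illustrative cache-sync wait; the checks are stubs, not real informers.
package main

import (
	"fmt"
	"time"
)

type hasSynced func() bool

// waitForCacheSync polls every check until all report true or stop closes.
func waitForCacheSync(stop <-chan struct{}, checks ...hasSynced) bool {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return false
		case <-ticker.C:
			allSynced := true
			for _, c := range checks {
				if !c() {
					allSynced = false
					break
				}
			}
			if allSynced {
				return true
			}
		}
	}
}

func main() {
	stop := make(chan struct{})
	start := time.Now()
	podsSynced := func() bool { return time.Since(start) > 200*time.Millisecond }
	nodesSynced := func() bool { return time.Since(start) > 300*time.Millisecond }
	fmt.Println("synced:", waitForCacheSync(stop, podsSynced, nodesSynced))
}
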
I0814 10:57:30.189453  110577 httplog.go:90] POST /api/v1/nodes: (1.929809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:57:30.190572  110577 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0814 10:57:30.192520  110577 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods: (1.85381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:57:30.193437  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/waiting-pod
I0814 10:57:30.193471  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/waiting-pod
I0814 10:57:30.193654  110577 scheduler_binder.go:256] AssumePodVolumes for pod "preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/waiting-pod", node "test-node-0"
I0814 10:57:30.193672  110577 scheduler_binder.go:266] AssumePodVolumes for pod "preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/waiting-pod", node "test-node-0": all PVCs bound and nothing to do
I0814 10:57:30.193726  110577 framework.go:562] waiting for 30s for pod "waiting-pod" at permit
I0814 10:57:30.201118  110577 factory.go:615] Attempting to bind signalling-pod to test-node-0
I0814 10:57:30.201574  110577 factory.go:615] Attempting to bind waiting-pod to test-node-0
I0814 10:57:30.201695  110577 scheduler.go:447] Failed to bind pod: permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod
E0814 10:57:30.201716  110577 scheduler.go:449] scheduler cache ForgetPod failed: pod 756dcde0-0b5a-4e30-89dc-c06ae81cf671 wasn't assumed so cannot be forgotten
E0814 10:57:30.201735  110577 scheduler.go:605] error binding pod: Post http://127.0.0.1:36161/api/v1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/pods/signalling-pod/binding: dial tcp 127.0.0.1:36161: connect: connection refused
E0814 10:57:30.201765  110577 factory.go:566] Error scheduling permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod: Post http://127.0.0.1:36161/api/v1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/pods/signalling-pod/binding: dial tcp 127.0.0.1:36161: connect: connection refused; retrying
I0814 10:57:30.201805  110577 factory.go:624] Updating pod condition for permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0814 10:57:30.202675  110577 factory.go:599] Error getting pod permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod for retry: Get http://127.0.0.1:36161/api/v1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/pods/signalling-pod: dial tcp 127.0.0.1:36161: connect: connection refused; retrying...
E0814 10:57:30.202747  110577 scheduler.go:280] Error updating the condition of the pod permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod: Put http://127.0.0.1:36161/api/v1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/pods/signalling-pod/status: dial tcp 127.0.0.1:36161: connect: connection refused
E0814 10:57:30.202783  110577 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36161/apis/events.k8s.io/v1beta1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/events: dial tcp 127.0.0.1:36161: connect: connection refused' (may retry after sleeping)
I0814 10:57:30.204818  110577 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/waiting-pod/binding: (3.01259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:57:30.205050  110577 scheduler.go:614] pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/waiting-pod is bound successfully on node "test-node-0", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
I0814 10:57:30.207497  110577 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/events: (2.126021ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
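
The "waiting for 30s for pod \"waiting-pod\" at permit" line above means the permit stage held the pod until a plugin allowed or rejected it, or the 30s timeout expired; in this run it appears to have been allowed almost immediately and was then bound to test-node-0. A minimal channel-based sketch of that wait-with-timeout behaviour, using illustrative names rather than the scheduler framework's API:

// Illustrative permit-style wait; names and types are placeholders.
package main

import (
	"fmt"
	"time"
)

// decision is what a waiting pod eventually receives at the permit stage.
type decision int

const (
	allow decision = iota
	reject
	timedOut
)

// waitAtPermit blocks until the pod is allowed or rejected, or the timeout fires.
func waitAtPermit(signal <-chan decision, timeout time.Duration) decision {
	select {
	case d := <-signal:
		return d
	case <-time.After(timeout):
		return timedOut
	}
}

func main() {
	signal := make(chan decision, 1)
	// Another scheduling cycle's plugin would normally send allow or reject here;
	// for the sketch, allow the pod right away, mirroring the quick bind above.
	signal <- allow
	fmt.Println(waitAtPermit(signal, 30*time.Second) == allow) // prints true
}
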
E0814 10:57:30.403248  110577 factory.go:599] Error getting pod permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod for retry: Get http://127.0.0.1:36161/api/v1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/pods/signalling-pod: dial tcp 127.0.0.1:36161: connect: connection refused; retrying...
E0814 10:57:30.803951  110577 factory.go:599] Error getting pod permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod for retry: Get http://127.0.0.1:36161/api/v1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/pods/signalling-pod: dial tcp 127.0.0.1:36161: connect: connection refused; retrying...
I0814 10:57:30.992183  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:30.992851  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:30.993004  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:30.993016  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:30.993116  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:30.994083  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:30.994387  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 10:57:31.604733  110577 factory.go:599] Error getting pod permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod for retry: Get http://127.0.0.1:36161/api/v1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/pods/signalling-pod: dial tcp 127.0.0.1:36161: connect: connection refused; retrying...
I0814 10:57:31.992423  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:31.993000  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:31.993180  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:31.993181  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:31.993230  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:31.994230  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:31.994631  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:32.992625  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:32.993156  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:32.993331  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:32.993352  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:32.993367  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:32.994454  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:32.994731  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 10:57:33.205353  110577 factory.go:599] Error getting pod permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod for retry: Get http://127.0.0.1:36161/api/v1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/pods/signalling-pod: dial tcp 127.0.0.1:36161: connect: connection refused; retrying...
I0814 10:57:33.993069  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:33.993590  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:33.993617  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:33.993595  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:33.993647  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:33.994625  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:33.994884  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:34.993277  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:34.993761  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:34.993778  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:34.993763  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:34.993786  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:34.994834  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:34.995044  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 10:57:35.641364  110577 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36377/apis/events.k8s.io/v1beta1/namespaces/permit-plugin8df58a24-47f8-48b8-9c52-d83f032416cd/events: dial tcp 127.0.0.1:36377: connect: connection refused' (may retry after sleeping)
I0814 10:57:35.993506  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:35.993938  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:35.993962  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:35.993988  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:35.994002  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:35.994971  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:35.995174  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 10:57:36.406021  110577 factory.go:599] Error getting pod permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod for retry: Get http://127.0.0.1:36161/api/v1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/pods/signalling-pod: dial tcp 127.0.0.1:36161: connect: connection refused; retrying...
I0814 10:57:36.993731  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:36.994153  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:36.994224  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:36.994238  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:36.994254  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:36.995113  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:36.995304  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:37.993938  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:37.994385  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:37.994471  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:37.994517  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:37.994551  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:37.995256  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:37.995474  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 10:57:38.305821  110577 factory.go:599] Error getting pod permit-plugin8df58a24-47f8-48b8-9c52-d83f032416cd/test-pod for retry: Get http://127.0.0.1:36377/api/v1/namespaces/permit-plugin8df58a24-47f8-48b8-9c52-d83f032416cd/pods/test-pod: dial tcp 127.0.0.1:36377: connect: connection refused; retrying...
I0814 10:57:38.895480  110577 httplog.go:90] GET /api/v1/namespaces/default: (1.341532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:57:38.897101  110577 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.149415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:57:38.898325  110577 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (873.481µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:57:38.994364  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:38.994626  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:38.994661  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:38.994695  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:38.994696  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:38.995540  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:38.995552  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:39.994613  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:39.994844  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:39.994861  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:39.994911  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:39.994928  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:39.995724  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:39.995730  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 10:57:40.273324  110577 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36161/apis/events.k8s.io/v1beta1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/events: dial tcp 127.0.0.1:36161: connect: connection refused' (may retry after sleeping)
I0814 10:57:40.994814  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:40.994963  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:40.995092  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:40.995110  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:40.995150  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:40.995871  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:40.995882  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:41.995013  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:41.995102  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:41.995211  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:41.995231  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:41.995245  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:41.996003  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:41.996051  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 10:57:42.807015  110577 factory.go:599] Error getting pod permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod for retry: Get http://127.0.0.1:36161/api/v1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/pods/signalling-pod: dial tcp 127.0.0.1:36161: connect: connection refused; retrying...
I0814 10:57:42.995223  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:42.995358  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:42.995374  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:42.995392  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:42.996209  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:42.996229  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:42.996251  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:43.995432  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:43.995544  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:43.995554  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:43.995573  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:43.996374  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:43.996374  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:43.996380  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:44.995627  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:44.995705  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:44.995706  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:44.995956  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:44.996595  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:44.996600  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:44.996625  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 10:57:45.682066  110577 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36377/apis/events.k8s.io/v1beta1/namespaces/permit-plugin8df58a24-47f8-48b8-9c52-d83f032416cd/events: dial tcp 127.0.0.1:36377: connect: connection refused' (may retry after sleeping)
I0814 10:57:45.995854  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:45.995867  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:45.995867  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:45.996109  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:45.996749  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:45.996760  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:45.996763  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:46.996034  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:46.996077  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:46.996160  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:46.996332  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:46.996922  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:46.996955  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:46.997475  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:47.996131  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:47.996191  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:47.996332  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:47.997683  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:47.997767  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:47.997810  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:47.997827  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:48.895717  110577 httplog.go:90] GET /api/v1/namespaces/default: (1.461871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:57:48.897379  110577 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.236414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:57:48.899056  110577 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.183817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:57:48.996518  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:48.996662  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:48.996685  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:48.997850  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:48.997955  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:48.998108  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:48.998129  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:49.996756  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:49.996819  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:49.996935  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:49.998098  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:49.998322  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:49.998340  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:49.998073  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 10:57:50.802746  110577 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36161/apis/events.k8s.io/v1beta1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/events: dial tcp 127.0.0.1:36161: connect: connection refused' (may retry after sleeping)
I0814 10:57:50.996964  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:50.996975  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:50.997071  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:50.998296  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:50.998469  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:50.998500  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:50.998746  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:51.997146  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:51.997147  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:51.997358  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:51.998461  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:51.998558  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:51.998590  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:51.998898  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:52.997246  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:52.997314  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:52.997943  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:52.998637  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:52.998702  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:52.998721  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:52.999589  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:53.997477  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:53.997607  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:53.998550  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:53.998746  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:53.998834  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:53.998852  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:53.999670  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:54.997739  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:54.997849  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:54.998660  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:54.998992  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:54.999030  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:54.999145  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:54.999852  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 10:57:55.607700  110577 factory.go:599] Error getting pod permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod for retry: Get http://127.0.0.1:36161/api/v1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/pods/signalling-pod: dial tcp 127.0.0.1:36161: connect: connection refused; retrying...
E0814 10:57:55.702406  110577 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36377/apis/events.k8s.io/v1beta1/namespaces/permit-plugin8df58a24-47f8-48b8-9c52-d83f032416cd/events: dial tcp 127.0.0.1:36377: connect: connection refused' (may retry after sleeping)
I0814 10:57:55.997927  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:55.997975  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:55.998850  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:55.999170  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:55.999196  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:55.999231  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:56.000065  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:56.998124  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:56.998171  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:56.999020  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:56.999282  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:56.999283  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:56.999376  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:57.000407  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:57.998334  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:57.998467  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:57.999206  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:57.999389  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:57.999426  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:57.999553  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:58.000577  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:58.896286  110577 httplog.go:90] GET /api/v1/namespaces/default: (1.854418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:57:58.898219  110577 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.454404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:57:58.899591  110577 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (992.422µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:57:58.998572  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:58.998567  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:58.999366  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:58.999564  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:58.999591  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:58.999751  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:59.000736  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:59.998768  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:59.998768  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:59.999557  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:59.999734  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:59.999757  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:57:59.999882  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:00.000921  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:00.196230  110577 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods: (2.569854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:58:00.196709  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:00.196734  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:00.196880  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:00.196949  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:00.199235  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.992955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:00.199355  110577 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod/status: (2.116029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:58:00.200428  110577 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/events: (1.509022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:00.200628  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (910.606µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37662]
I0814 10:58:00.200916  110577 generic_scheduler.go:1191] Node test-node-0 is a potential node for preemption.
I0814 10:58:00.203601  110577 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod/status: (2.264483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:00.206277  110577 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/waiting-pod: (2.258033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:00.207970  110577 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/events: (1.16769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:00.300695  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.982483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:00.402721  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.869526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:00.498589  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.622399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:00.599047  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.08647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:00.698863  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.907417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:00.798914  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.913709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:00.898806  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.731534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:00.999064  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:00.999131  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:00.999355  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.35141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:00.999848  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:00.999861  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:01.000039  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:01.005506  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:01.005506  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:01.098986  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.932659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:01.198856  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.886392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:01.299089  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.060768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:01.399066  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.011051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:01.498717  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.720186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:01.598976  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.920377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:01.698736  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.739063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:01.799053  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.869857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:01.898944  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.910054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:01.987122  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:01.987167  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:01.987359  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:01.987433  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:01.989920  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.096846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:01.989925  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.644187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:01.990350  110577 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/events: (2.036195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41496]
I0814 10:58:01.999009  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.05627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:01.999191  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:01.999286  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:01.999978  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:02.000071  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:02.000173  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:02.005702  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:02.005718  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:02.099054  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.058926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:02.198881  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.924788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
E0814 10:58:02.229633  110577 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36161/apis/events.k8s.io/v1beta1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/events: dial tcp 127.0.0.1:36161: connect: connection refused' (may retry after sleeping)
I0814 10:58:02.299111  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.111703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:02.398772  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.801001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:02.498848  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.87598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:02.598935  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.889005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:02.698829  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.818287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:02.798817  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.832859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:02.900026  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.149727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:02.998878  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.887324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:02.999342  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:02.999473  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:02.999630  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:02.999645  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:02.999778  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:02.999817  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:03.000081  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:03.000193  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:03.000299  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:03.003112  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.038105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:03.003922  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.819421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41440]
I0814 10:58:03.004597  110577 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/events/preemptor-pod.15bac4dee02b7399: (3.494959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:03.005851  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:03.005855  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:03.098846  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.86977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:03.199028  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.968251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:03.299441  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.389306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:03.399029  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.98378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:03.499153  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.114757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:03.599223  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.156455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:03.698927  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.963126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:03.799087  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.025836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:03.899298  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.214227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
E0814 10:58:03.906594  110577 factory.go:599] Error getting pod permit-plugin8df58a24-47f8-48b8-9c52-d83f032416cd/test-pod for retry: Get http://127.0.0.1:36377/api/v1/namespaces/permit-plugin8df58a24-47f8-48b8-9c52-d83f032416cd/pods/test-pod: dial tcp 127.0.0.1:36377: connect: connection refused; retrying...
I0814 10:58:03.998883  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.920996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:03.999518  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:03.999612  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:03.999725  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:03.999735  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:03.999863  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:03.999905  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:04.000457  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:04.000498  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:04.000624  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:04.001787  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.553018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:04.001896  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.697763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:04.006042  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:04.006043  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:04.098807  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.777212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:04.198775  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.760379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:04.299024  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.909688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:04.398800  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.789642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:04.498802  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.81142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:04.598662  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.659443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:04.698712  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.756059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:04.798873  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.884043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:04.899109  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.10088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:04.998722  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.69841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:04.999699  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:04.999771  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:04.999976  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:05.000002  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:05.000160  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:05.000210  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:05.000630  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:05.000678  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:05.000910  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:05.002072  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.641154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:05.002072  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.58804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:05.006675  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:05.006675  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:05.099009  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.023929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:05.198765  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.730821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:05.298611  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.613763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:05.398940  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.044112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:05.498674  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.645304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:05.598521  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.545407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:05.698492  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.548268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:05.798501  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.524674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:05.898649  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.658628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:05.998578  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.599419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:05.999844  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:05.999951  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:06.000098  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:06.000119  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:06.000279  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:06.000335  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:06.000778  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:06.000804  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:06.001049  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:06.002113  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.36201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:06.002125  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.415464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:06.006848  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:06.006964  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:06.098517  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.541296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:06.198847  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.839958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
E0814 10:58:06.261147  110577 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36377/apis/events.k8s.io/v1beta1/namespaces/permit-plugin8df58a24-47f8-48b8-9c52-d83f032416cd/events: dial tcp 127.0.0.1:36377: connect: connection refused' (may retry after sleeping)
I0814 10:58:06.298494  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.55631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:06.399593  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.892985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:06.498483  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.505861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:06.601191  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.799257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:06.700004  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.056505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:06.799305  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.198348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:06.900643  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.524609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:06.998490  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.502469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:07.000058  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:07.000149  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:07.000186  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:07.000195  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:07.000329  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:07.000363  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:07.000951  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:07.000992  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:07.001652  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:07.003449  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.990975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:07.003826  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.661351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:07.007036  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:07.007069  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:07.098736  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.700648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:07.198675  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.644438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:07.298927  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.871919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:07.398771  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.747194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:07.498882  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.908413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:07.599846  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.144404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:07.699576  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.570238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:07.798898  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.787801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:07.898649  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.595841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:07.998753  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.605435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:08.000238  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:08.000241  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:08.000400  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:08.000422  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:08.000568  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:08.000623  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:08.001319  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:08.001393  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:08.001827  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:08.002483  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.633847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:08.003230  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.23211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:08.007225  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:08.007359  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:08.104075  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.951271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:08.199030  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.059013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:08.298725  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.646344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:08.398742  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.691463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:08.498814  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.924863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:08.598931  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.91605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:08.699080  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.090591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:08.808695  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (11.745275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:08.897401  110577 httplog.go:90] GET /api/v1/namespaces/default: (2.770832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:08.898635  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.754645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:08.899410  110577 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.562636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:08.902327  110577 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.996076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:08.999185  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.220184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:09.000397  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:09.000405  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:09.000598  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:09.000623  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:09.000784  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:09.000834  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:09.002390  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:09.002944  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:09.003188  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:09.003826  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.444162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:09.004183  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.157131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:09.007408  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:09.007560  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:09.106470  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (9.502159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:09.198460  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.459967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:09.304107  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (7.139446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:09.398692  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.667095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:09.499201  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.150093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:09.598913  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.928423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:09.698790  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.832981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:09.799790  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.772414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:09.898958  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.886455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:09.998719  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.757354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:10.000605  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:10.000605  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:10.000798  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:10.000828  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:10.000988  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:10.001038  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:10.002585  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:10.003305  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.741153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:10.003626  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:10.003966  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:10.004768  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.108019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:10.007609  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:10.007695  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:10.098694  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.803398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:10.198893  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.847494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:10.298940  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.965694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:10.398490  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.490511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:10.498817  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.825733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:10.599800  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.961107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:10.698568  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.529158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:10.798838  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.702747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:10.898810  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.81599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:10.998309  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.373538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:11.000731  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:11.000833  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:11.000863  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:11.000876  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:11.001007  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:11.001040  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:11.002717  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:11.003789  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:11.004465  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.279712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:11.004719  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.037939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:11.005138  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:11.007819  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:11.007915  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:11.100198  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.22541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:11.198692  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.659433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:11.298996  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.992073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:11.398929  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.894941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:11.498745  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.810893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:11.598578  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.574754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:11.698712  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.757695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:11.798922  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.883358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:11.899188  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.086852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:11.998753  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.721464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:12.000898  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:12.001062  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:12.001077  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:12.001247  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:12.001296  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:12.002727  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:12.002846  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:12.004373  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.981057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:12.004800  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:12.005301  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:12.006033  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (4.19839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:12.007979  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:12.008097  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:12.099040  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.049272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:12.199105  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.018915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:12.299498  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.369369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:12.398967  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.845386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:12.498978  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.953432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:12.598636  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.66945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:12.698900  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.866642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:12.799003  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.034948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:12.898891  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.831914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:12.998766  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.772991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:13.001042  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:13.001229  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:13.001252  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:13.001409  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:13.001464  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:13.003177  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:13.003213  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.348018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:13.003260  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:13.004046  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.240091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:13.004933  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:13.005472  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:13.008186  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:13.008274  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:13.098761  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.738332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:13.200112  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.123313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:13.298578  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.57924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:13.398997  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.972016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:13.499088  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.112724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:13.598944  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.958652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:13.699894  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.821134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:13.800059  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.519649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:13.899115  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.125706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:13.999836  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.937642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:14.001306  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:14.001418  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:14.001429  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:14.001637  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:14.001691  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:14.003389  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:14.003774  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.786573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:14.003903  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:14.005165  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:14.006235  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:14.008373  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:14.008407  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:14.011988  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (10.057652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:14.100224  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.752842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:14.199938  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.553096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
E0814 10:58:14.224595  110577 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36161/apis/events.k8s.io/v1beta1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/events: dial tcp 127.0.0.1:36161: connect: connection refused' (may retry after sleeping)
I0814 10:58:14.298911  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.939373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:14.398720  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.670781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:14.498929  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.872045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:14.598472  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.561532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:14.698871  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.878046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:14.799669  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.7178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:14.899270  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.203291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:14.998905  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.848648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:15.001493  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:15.001678  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:15.001707  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:15.001881  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:15.001971  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:15.003563  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:15.003848  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.485398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:15.003856  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.650107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:15.004176  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:15.005344  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:15.006395  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:15.008551  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:15.008550  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:15.098642  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.700079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:15.198979  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.81291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:15.299095  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.103749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:15.399123  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.121859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:15.498893  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.947943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:15.598913  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.884299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:15.698818  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.818535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:15.798754  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.765253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:15.899239  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.149663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:15.998961  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.904501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:16.001662  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:16.001817  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:16.001840  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:16.002014  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:16.002077  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:16.003742  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:16.003977  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.46402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:16.004100  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.577339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:16.004274  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:16.005491  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:16.006524  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:16.008706  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:16.008724  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:16.099075  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.072493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:16.199015  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.928025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:16.299212  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.206177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:16.399096  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.986713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:16.499187  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.159237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:16.599088  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.993611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:16.698832  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.805274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:16.799181  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.144669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:16.899088  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.160263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:16.998979  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.010515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:17.001830  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:17.001971  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:17.001981  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:17.002180  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:17.002241  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:17.003927  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:17.004134  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.556624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:17.004267  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.683396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:17.004472  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:17.005712  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:17.006719  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:17.008936  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:17.008960  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:17.099014  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.986381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:17.198989  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.757467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:17.299501  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.038068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:17.399156  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.121497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:17.501212  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.732734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
E0814 10:58:17.580333  110577 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36377/apis/events.k8s.io/v1beta1/namespaces/permit-plugin8df58a24-47f8-48b8-9c52-d83f032416cd/events: dial tcp 127.0.0.1:36377: connect: connection refused' (may retry after sleeping)
I0814 10:58:17.598421  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.529417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:17.698978  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.895574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:17.798742  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.681048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:17.898953  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.927799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:17.998896  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.919275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:18.002018  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:18.002151  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:18.002163  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:18.002362  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:18.002420  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:18.004573  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.733381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:18.004582  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.801729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:18.004627  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:18.004669  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:18.005947  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:18.006876  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:18.009179  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:18.009227  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:18.098915  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.902554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:18.198944  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.904094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:18.298673  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.678985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:18.398865  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.895758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:18.498688  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.733956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:18.598742  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.714578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:18.699214  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.793253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:18.798863  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.874419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:18.896503  110577 httplog.go:90] GET /api/v1/namespaces/default: (1.685848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:18.898349  110577 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.313476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:18.899389  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.555021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:18.899931  110577 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.125216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:18.998823  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.810632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:19.002216  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:19.002430  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:19.002455  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:19.002629  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:19.002697  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:19.004728  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.647529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:19.004755  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:19.004755  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:19.005230  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.650775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:19.006127  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:19.007034  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:19.009329  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:19.009356  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:19.100025  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.316985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:19.198983  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.940552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:19.299628  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.672739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:19.398663  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.661284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:19.498608  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.629276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:19.598785  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.715492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:19.698752  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.696294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:19.798767  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.77477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:19.898855  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.782789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:19.998510  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.565686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:20.002411  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:20.002589  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:20.002612  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:20.002769  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:20.002847  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:20.004644  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.49401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:20.004805  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.660053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:20.005152  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:20.005184  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:20.006306  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:20.007182  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:20.009516  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:20.009611  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:20.098413  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.488931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:20.199173  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.098404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:20.299230  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.122008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:20.399022  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.909081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:20.498990  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.985671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:20.599182  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.208441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:20.698808  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.832775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:20.798775  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.71941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:20.898926  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.809662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:20.999110  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.132692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:21.002625  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:21.002801  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:21.002820  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:21.003029  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:21.003079  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:21.005288  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:21.005320  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:21.006119  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.967922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:21.006771  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.63886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:21.006942  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:21.007348  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:21.009688  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:21.009714  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:21.098636  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.666626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:21.199929  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.84774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
E0814 10:58:21.208261  110577 factory.go:599] Error getting pod permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/signalling-pod for retry: Get http://127.0.0.1:36161/api/v1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/pods/signalling-pod: dial tcp 127.0.0.1:36161: connect: connection refused; retrying...
I0814 10:58:21.299070  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.004617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:21.399128  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.058432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:21.499459  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.394939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:21.599210  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.160819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:21.699023  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.060958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:21.798964  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.890264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:21.899020  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.997458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:21.999103  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.031245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:22.002892  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:22.003125  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:22.003153  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:22.003354  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:22.003416  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:22.005690  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.833952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:22.005747  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:22.005775  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:22.005695  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.753773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:22.007154  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:22.007504  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:22.009836  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:22.009904  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:22.099006  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.98423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:22.198849  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.783181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:22.298761  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.843644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:22.398945  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.855789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:22.499050  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.11157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:22.598959  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.896316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:22.698779  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.846664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:22.799621  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.765241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:22.898959  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.977365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:22.998979  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.937504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:23.003113  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:23.003284  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:23.003302  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:23.003461  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:23.003508  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:23.005674  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.860131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:23.005951  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:23.005993  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:23.006244  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.272775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:23.007309  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:23.007661  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:23.009994  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:23.010024  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:23.098901  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.897186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:23.198970  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.956681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:23.298935  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.862538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:23.399184  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.151846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:23.498747  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.761029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:23.598874  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.844579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:23.698749  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.749194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:23.799219  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.206284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:23.899005  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.045223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:23.998863  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.883931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:24.003328  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:24.003596  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:24.003614  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:24.003761  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:24.003813  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:24.006517  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.538819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:24.006660  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.388567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:24.007110  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:24.007142  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:24.007455  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:24.007812  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:24.010171  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:24.010175  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:24.099206  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.172729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:24.199086  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.07922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:24.299049  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.011384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:24.398775  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.775271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:24.498981  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.001514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:24.599357  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.352785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:24.698694  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.700549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:24.798900  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.894015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:24.898777  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.735428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:24.998845  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.853459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.003455  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:25.003676  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:25.003702  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:25.003908  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:25.003984  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:25.007166  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.510491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:25.007850  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.535578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.007971  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:25.008470  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:25.008477  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:25.008498  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:25.010374  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:25.010422  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:25.099073  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.960077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.199060  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.021922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
E0814 10:58:25.232009  110577 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36161/apis/events.k8s.io/v1beta1/namespaces/permit-plugin0d2b0f26-aeee-48e9-b6fe-ddee8977cf70/events: dial tcp 127.0.0.1:36161: connect: connection refused' (may retry after sleeping)
I0814 10:58:25.298948  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.033965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.398805  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.802176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.499181  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.140994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.599995  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.293165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.603156  110577 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.478556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.605047  110577 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.597619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.607287  110577 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.610074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.698997  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.035071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.798719  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.724874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.899460  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.445302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:25.999403  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.409103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:26.003670  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:26.003887  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:26.003929  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:26.004098  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:26.004159  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:26.006709  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.789357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:26.007586  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.817682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:26.008176  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:26.008657  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:26.008687  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:26.008705  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:26.010602  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:26.010722  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:26.099263  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.223047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:26.198872  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.840822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:26.299165  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.07979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:26.399040  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.022115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:26.498917  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.937536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:26.599296  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.240914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:26.699251  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.118141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:26.799157  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.075303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:26.899441  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.427153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:26.998705  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.728786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:27.003948  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:27.004139  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:27.004159  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:27.004357  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:27.004417  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:27.007776  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.728438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:27.008617  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.904591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:27.009092  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:27.009093  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:27.009130  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:27.009159  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:27.010744  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:27.010904  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:27.099232  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.199067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:27.199069  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.087027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:27.299366  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.007207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:27.398978  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.976471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:27.499144  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.154456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:27.598943  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.928566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:27.699102  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.10006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:27.798636  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.689214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:27.899038  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.073973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:27.991981  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:27.992016  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:27.992262  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:27.992315  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:27.995780  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.08853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:27.996452  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.740454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:27.998399  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.43172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:28.004161  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:28.009269  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:28.009309  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:28.009321  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:28.009331  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:28.010958  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:28.011026  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:28.099316  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.268062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:28.198891  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.842855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:28.298574  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.545222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:28.398958  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.887689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:28.498738  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.72344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
E0814 10:58:28.500417  110577 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:36377/apis/events.k8s.io/v1beta1/namespaces/permit-plugin8df58a24-47f8-48b8-9c52-d83f032416cd/events: dial tcp 127.0.0.1:36377: connect: connection refused' (may retry after sleeping)
I0814 10:58:28.598674  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.57831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:28.699008  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.971368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:28.798991  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.978286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:28.896661  110577 httplog.go:90] GET /api/v1/namespaces/default: (1.779081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:28.898362  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.537357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:28.898388  110577 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.297606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:28.900144  110577 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.176696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:28.998756  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.762508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:29.004367  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:29.004577  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:29.004604  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:29.004757  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:29.004821  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:29.007140  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.278252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:29.007456  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (2.19606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:29.009405  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:29.009444  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:29.009454  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:29.009479  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:29.011118  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:29.011163  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:29.098980  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.969465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:29.198776  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.795348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:29.298937  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.867182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:29.398739  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.753623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:29.500235  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.250626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:29.600917  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.903077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:29.700289  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.240238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:29.801523  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (4.562305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:29.904224  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (7.160265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:30.001135  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.783043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:30.004862  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:30.005096  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:30.005120  110577 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:30.005268  110577 factory.go:550] Unable to schedule preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 10:58:30.005339  110577 factory.go:624] Updating pod condition for preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 10:58:30.009611  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:30.009699  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:30.009724  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:30.009737  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:30.011276  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:30.011308  110577 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 10:58:30.016643  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (9.629217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:30.019726  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (13.38798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:30.098998  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.81511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:30.198729  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.760838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:30.200998  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (1.778434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:30.204565  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/waiting-pod: (3.009329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:30.212749  110577 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/waiting-pod: (7.711185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:30.227142  110577 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:30.227186  110577 scheduler.go:473] Skip schedule deleting pod: preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/preemptor-pod
I0814 10:58:30.230402  110577 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/events: (2.775924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41778]
I0814 10:58:30.230778  110577 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (17.612461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:30.234222  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/waiting-pod: (1.831102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:30.240255  110577 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf801686f-f9bb-4dc0-bfe3-5e81661c29bb/pods/preemptor-pod: (3.400071ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:30.241604  110577 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=29470&timeout=5m23s&timeoutSeconds=323&watch=true: (1m1.248425398s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37660]
I0814 10:58:30.241624  110577 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=29472&timeout=5m25s&timeoutSeconds=325&watch=true: (1m1.247764866s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37656]
I0814 10:58:30.241762  110577 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=29682&timeout=5m24s&timeoutSeconds=324&watch=true: (1m1.247514684s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37658]
I0814 10:58:30.241797  110577 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=29471&timeout=7m46s&timeoutSeconds=466&watch=true: (1m1.247638451s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37666]
I0814 10:58:30.241878  110577 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=29472&timeout=9m19s&timeoutSeconds=559&watch=true: (1m1.247386916s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37670]
I0814 10:58:30.241917  110577 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=29470&timeout=5m13s&timeoutSeconds=313&watch=true: (1m1.247262267s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37668]
I0814 10:58:30.241981  110577 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=29470&timeout=9m38s&timeoutSeconds=578&watch=true: (1m1.247254994s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37478]
I0814 10:58:30.242031  110577 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=29470&timeout=8m9s&timeoutSeconds=489&watch=true: (1m1.248389197s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37664]
I0814 10:58:30.242081  110577 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=29470&timeout=8m12s&timeoutSeconds=492&watch=true: (1m1.250078093s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37680]
I0814 10:58:30.242424  110577 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=29472&timeout=6m19s&timeoutSeconds=379&watch=true: (1m1.251324452s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37674]
I0814 10:58:30.242564  110577 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=29472&timeout=6m10s&timeoutSeconds=370&watch=true: (1m1.249679763s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37458]
E0814 10:58:30.242665  110577 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0814 10:58:30.255326  110577 httplog.go:90] DELETE /api/v1/nodes: (13.200983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:30.255550  110577 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 10:58:30.257070  110577 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.282298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
I0814 10:58:30.259791  110577 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.008909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37676]
--- FAIL: TestPreemptWithPermitPlugin (64.84s)
    framework_test.go:1618: Expected the preemptor pod to be scheduled. error: timed out waiting for the condition
    framework_test.go:1622: Expected the waiting pod to get preempted and deleted

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-105022.xml
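Note: the failure above is a poll timeout. The repeated httplog GETs of preemptor-pod show the test polling until the pod's PodScheduled condition turns True, and giving up with "timed out waiting for the condition" because the scheduler keeps reporting "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory" and the waiting pod is never preempted. The sketch below is a hypothetical, minimal version of such a wait loop built on client-go's polling helper; it is not the actual framework_test.go helper, and the kubeconfig source, namespace, pod name, poll interval, and 60s timeout are illustrative assumptions only.

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodScheduled polls until the pod's PodScheduled condition is True.
// When the pod never fits on a node, wait.PollImmediate returns the generic
// "timed out waiting for the condition" error seen in the test failure.
func waitForPodScheduled(cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodScheduled && cond.Status == v1.ConditionTrue {
				return true, nil // scheduled; stop polling
			}
		}
		return false, nil // not scheduled yet; keep polling
	})
}

func main() {
	// Assumption for illustration: load a kubeconfig from the default location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForPodScheduled(cs, "default", "preemptor-pod", 60*time.Second); err != nil {
		fmt.Println("pod not scheduled:", err) // e.g. "timed out waiting for the condition"
	}
}
```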

Error lines from build-log.txt

... skipping 693 lines ...
W0814 10:45:20.043] W0814 10:45:20.042278   53084 controllermanager.go:555] "serviceaccount-token" is disabled because there is no private key
W0814 10:45:20.043] I0814 10:45:20.043410   53084 controllermanager.go:535] Started "podgc"
W0814 10:45:20.043] W0814 10:45:20.043658   53084 controllermanager.go:527] Skipping "nodeipam"
W0814 10:45:20.044] I0814 10:45:20.043605   53084 gc_controller.go:76] Starting GC controller
W0814 10:45:20.044] I0814 10:45:20.044319   53084 controller_utils.go:1029] Waiting for caches to sync for GC controller
W0814 10:45:20.045] I0814 10:45:20.044567   53084 node_lifecycle_controller.go:77] Sending events to api server
W0814 10:45:20.045] E0814 10:45:20.044724   53084 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0814 10:45:20.045] W0814 10:45:20.044914   53084 controllermanager.go:527] Skipping "cloud-node-lifecycle"
W0814 10:45:20.045] W0814 10:45:20.044943   53084 controllermanager.go:527] Skipping "root-ca-cert-publisher"
W0814 10:45:20.198] I0814 10:45:20.198121   53084 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
W0814 10:45:20.199] I0814 10:45:20.198230   53084 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
W0814 10:45:20.199] I0814 10:45:20.198387   53084 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
W0814 10:45:20.199] I0814 10:45:20.198427   53084 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
... skipping 47 lines ...
W0814 10:45:20.209] I0814 10:45:20.204265   53084 controller_utils.go:1029] Waiting for caches to sync for job controller
W0814 10:45:20.209] I0814 10:45:20.204551   53084 controllermanager.go:535] Started "statefulset"
W0814 10:45:20.209] W0814 10:45:20.204575   53084 controllermanager.go:514] "bootstrapsigner" is disabled
W0814 10:45:20.210] W0814 10:45:20.204580   53084 controllermanager.go:514] "tokencleaner" is disabled
W0814 10:45:20.210] I0814 10:45:20.204613   53084 stateful_set.go:145] Starting stateful set controller
W0814 10:45:20.210] I0814 10:45:20.204626   53084 controller_utils.go:1029] Waiting for caches to sync for stateful set controller
W0814 10:45:20.210] E0814 10:45:20.205110   53084 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0814 10:45:20.211] W0814 10:45:20.205146   53084 controllermanager.go:527] Skipping "service"
W0814 10:45:20.211] I0814 10:45:20.205418   53084 controllermanager.go:535] Started "pv-protection"
W0814 10:45:20.211] W0814 10:45:20.205445   53084 controllermanager.go:527] Skipping "ttl-after-finished"
W0814 10:45:20.211] I0814 10:45:20.205456   53084 pv_protection_controller.go:82] Starting PV protection controller
W0814 10:45:20.211] I0814 10:45:20.205478   53084 controller_utils.go:1029] Waiting for caches to sync for PV protection controller
W0814 10:45:20.212] I0814 10:45:20.205891   53084 controllermanager.go:535] Started "replicationcontroller"
... skipping 41 lines ...
I0814 10:45:20.545] Client Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.0-alpha.0.101+eeec166cfb48cc", GitCommit:"eeec166cfb48cc8a46efa5c91f2b93d4007e7098", GitTreeState:"clean", BuildDate:"2019-08-14T10:43:20Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
I0814 10:45:20.546] Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.0-alpha.0.101+eeec166cfb48cc", GitCommit:"eeec166cfb48cc8a46efa5c91f2b93d4007e7098", GitTreeState:"clean", BuildDate:"2019-08-14T10:43:42Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
W0814 10:45:20.646] I0814 10:45:20.531298   53084 garbagecollector.go:129] Starting garbage collector controller
W0814 10:45:20.646] I0814 10:45:20.531574   53084 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0814 10:45:20.647] I0814 10:45:20.531334   53084 controllermanager.go:535] Started "garbagecollector"
W0814 10:45:20.647] I0814 10:45:20.531651   53084 graph_builder.go:282] GraphBuilder running
W0814 10:45:20.647] W0814 10:45:20.560562   53084 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0814 10:45:20.647] I0814 10:45:20.600743   53084 controller_utils.go:1036] Caches are synced for HPA controller
W0814 10:45:20.647] I0814 10:45:20.604499   53084 controller_utils.go:1036] Caches are synced for job controller
W0814 10:45:20.648] I0814 10:45:20.606341   53084 controller_utils.go:1036] Caches are synced for ReplicationController controller
W0814 10:45:20.648] I0814 10:45:20.606732   53084 controller_utils.go:1036] Caches are synced for ReplicaSet controller
W0814 10:45:20.648] I0814 10:45:20.608210   53084 controller_utils.go:1036] Caches are synced for certificate controller
W0814 10:45:20.648] I0814 10:45:20.614206   53084 controller_utils.go:1036] Caches are synced for TTL controller
... skipping 106 lines ...
I0814 10:45:24.554] +++ working dir: /go/src/k8s.io/kubernetes
I0814 10:45:24.556] +++ command: run_RESTMapper_evaluation_tests
I0814 10:45:24.569] +++ [0814 10:45:24] Creating namespace namespace-1565779524-7520
I0814 10:45:24.640] namespace/namespace-1565779524-7520 created
I0814 10:45:24.711] Context "test" modified.
I0814 10:45:24.717] +++ [0814 10:45:24] Testing RESTMapper
I0814 10:45:24.830] +++ [0814 10:45:24] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0814 10:45:24.843] +++ exit code: 0
I0814 10:45:24.957] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0814 10:45:24.958] bindings                                                                      true         Binding
I0814 10:45:24.958] componentstatuses                 cs                                          false        ComponentStatus
I0814 10:45:24.958] configmaps                        cm                                          true         ConfigMap
I0814 10:45:24.959] endpoints                         ep                                          true         Endpoints
... skipping 664 lines ...
I0814 10:45:44.164] poddisruptionbudget.policy/test-pdb-3 created
I0814 10:45:44.266] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0814 10:45:44.357] poddisruptionbudget.policy/test-pdb-4 created
I0814 10:45:44.453] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0814 10:45:44.620] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:45:44.808] pod/env-test-pod created
W0814 10:45:44.908] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0814 10:45:44.909] error: setting 'all' parameter but found a non empty selector. 
W0814 10:45:44.909] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 10:45:44.909] I0814 10:45:43.810732   49631 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0814 10:45:44.909] error: min-available and max-unavailable cannot be both specified
I0814 10:45:45.010] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0814 10:45:45.010] Name:         env-test-pod
I0814 10:45:45.010] Namespace:    test-kubectl-describe-pod
I0814 10:45:45.010] Priority:     0
I0814 10:45:45.010] Node:         <none>
I0814 10:45:45.010] Labels:       <none>
... skipping 173 lines ...
I0814 10:45:58.243] pod/valid-pod patched
I0814 10:45:58.336] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0814 10:45:58.412] pod/valid-pod patched
I0814 10:45:58.502] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0814 10:45:58.662] pod/valid-pod patched
I0814 10:45:58.760] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 10:45:58.934] +++ [0814 10:45:58] "kubectl patch with resourceVersion 496" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0814 10:45:59.172] pod "valid-pod" deleted
I0814 10:45:59.185] pod/valid-pod replaced
I0814 10:45:59.278] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0814 10:45:59.434] Successful
I0814 10:45:59.435] message:error: --grace-period must have --force specified
I0814 10:45:59.435] has:\-\-grace-period must have \-\-force specified
I0814 10:45:59.588] Successful
I0814 10:45:59.588] message:error: --timeout must have --force specified
I0814 10:45:59.588] has:\-\-timeout must have \-\-force specified
I0814 10:45:59.742] node/node-v1-test created
W0814 10:45:59.843] W0814 10:45:59.742319   53084 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0814 10:45:59.944] node/node-v1-test replaced
I0814 10:45:59.989] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0814 10:46:00.068] node "node-v1-test" deleted
I0814 10:46:00.162] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 10:46:00.440] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0814 10:46:01.381] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 66 lines ...
I0814 10:46:05.406] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:05.563] pod/test-pod created
W0814 10:46:05.664] Edit cancelled, no changes made.
W0814 10:46:05.665] Edit cancelled, no changes made.
W0814 10:46:05.665] Edit cancelled, no changes made.
W0814 10:46:05.665] Edit cancelled, no changes made.
W0814 10:46:05.665] error: 'name' already has a value (valid-pod), and --overwrite is false
W0814 10:46:05.665] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 10:46:05.666] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 10:46:05.766] pod "test-pod" deleted
I0814 10:46:05.767] +++ [0814 10:46:05] Creating namespace namespace-1565779565-19508
I0814 10:46:05.814] namespace/namespace-1565779565-19508 created
I0814 10:46:05.888] Context "test" modified.
... skipping 41 lines ...
I0814 10:46:08.990] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0814 10:46:08.992] +++ working dir: /go/src/k8s.io/kubernetes
I0814 10:46:08.994] +++ command: run_kubectl_create_error_tests
I0814 10:46:09.005] +++ [0814 10:46:09] Creating namespace namespace-1565779569-13853
I0814 10:46:09.082] namespace/namespace-1565779569-13853 created
I0814 10:46:09.160] Context "test" modified.
I0814 10:46:09.165] +++ [0814 10:46:09] Testing kubectl create with error
W0814 10:46:09.266] Error: must specify one of -f and -k
W0814 10:46:09.266] 
W0814 10:46:09.267] Create a resource from a file or from stdin.
W0814 10:46:09.267] 
W0814 10:46:09.267]  JSON and YAML formats are accepted.
W0814 10:46:09.267] 
W0814 10:46:09.267] Examples:
... skipping 41 lines ...
W0814 10:46:09.273] 
W0814 10:46:09.273] Usage:
W0814 10:46:09.273]   kubectl create -f FILENAME [options]
W0814 10:46:09.273] 
W0814 10:46:09.273] Use "kubectl <command> --help" for more information about a given command.
W0814 10:46:09.273] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0814 10:46:09.381] +++ [0814 10:46:09] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
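The validation failure above comes from a container args list that contains an empty (null) entry, which the client-side schema check reports as an unknown "nil" object; the message also names the escape hatch. A sketch (the YAML fragment shows the shape, not the exact file contents):
  #   containers:
  #   - name: test
  #     image: busybox
  #     args:
  #     -                      # null list item -> unknown object type "nil"
  kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml                    # rejected by client-side validation
  kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml --validate=false   # skips the client-side check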
W0814 10:46:09.481] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 10:46:09.482] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 10:46:09.582] +++ exit code: 0
I0814 10:46:09.583] Recording: run_kubectl_apply_tests
I0814 10:46:09.583] Running command: run_kubectl_apply_tests
I0814 10:46:09.599] 
... skipping 19 lines ...
W0814 10:46:11.595] I0814 10:46:11.595269   49631 client.go:354] parsed scheme: ""
W0814 10:46:11.596] I0814 10:46:11.595305   49631 client.go:354] scheme "" not registered, fallback to default scheme
W0814 10:46:11.596] I0814 10:46:11.595340   49631 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0814 10:46:11.597] I0814 10:46:11.595379   49631 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 10:46:11.597] I0814 10:46:11.596689   49631 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 10:46:11.599] I0814 10:46:11.599129   49631 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0814 10:46:11.687] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0814 10:46:11.788] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0814 10:46:11.788] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 10:46:11.801] +++ exit code: 0
I0814 10:46:11.834] Recording: run_kubectl_run_tests
I0814 10:46:11.834] Running command: run_kubectl_run_tests
I0814 10:46:11.855] 
... skipping 87 lines ...
I0814 10:46:14.314] Context "test" modified.
I0814 10:46:14.321] +++ [0814 10:46:14] Testing kubectl create filter
I0814 10:46:14.411] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:14.602] pod/selector-test-pod created
I0814 10:46:14.698] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0814 10:46:14.780] Successful
I0814 10:46:14.780] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0814 10:46:14.780] has:pods "selector-test-pod-dont-apply" not found
I0814 10:46:14.859] pod "selector-test-pod" deleted
I0814 10:46:14.876] +++ exit code: 0
I0814 10:46:14.910] Recording: run_kubectl_apply_deployments_tests
I0814 10:46:14.911] Running command: run_kubectl_apply_deployments_tests
I0814 10:46:14.928] 
... skipping 28 lines ...
W0814 10:46:16.555] I0814 10:46:13.164070   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565779571-9168", Name:"nginx-apps", UID:"0fe21ce8-dc14-42f4-895e-67ccd85978dc", APIVersion:"apps/v1", ResourceVersion:"524", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-apps-59949c48c to 1
W0814 10:46:16.556] I0814 10:46:13.169053   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779571-9168", Name:"nginx-apps-59949c48c", UID:"bb27afc9-06eb-41da-81de-6019a450a13a", APIVersion:"apps/v1", ResourceVersion:"525", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-apps-59949c48c-qct8v
W0814 10:46:16.556] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 10:46:16.556] I0814 10:46:13.596780   49631 controller.go:606] quota admission added evaluator for: cronjobs.batch
W0814 10:46:16.557] I0814 10:46:15.535148   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565779574-6826", Name:"my-depl", UID:"efdb73cd-a886-4648-a4dd-7662d9cf1473", APIVersion:"apps/v1", ResourceVersion:"550", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-67dc88cf84 to 1
W0814 10:46:16.557] I0814 10:46:15.543162   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779574-6826", Name:"my-depl-67dc88cf84", UID:"40a7bd4c-d0df-4ade-9973-b9de58b6b473", APIVersion:"apps/v1", ResourceVersion:"551", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-67dc88cf84-zpgrb
W0814 10:46:16.558] E0814 10:46:16.467127   53084 replica_set.go:450] Sync "namespace-1565779574-6826/my-depl-67dc88cf84" failed with replicasets.apps "my-depl-67dc88cf84" not found
I0814 10:46:16.659] apps.sh:138: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:16.659] apps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:16.742] apps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:16.833] apps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:16.999] deployment.apps/nginx created
W0814 10:46:17.100] I0814 10:46:17.004828   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565779574-6826", Name:"nginx", UID:"7661b097-e41c-4318-899f-467eb31d394e", APIVersion:"apps/v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7dbc4d9f to 3
W0814 10:46:17.101] I0814 10:46:17.008659   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779574-6826", Name:"nginx-7dbc4d9f", UID:"c5e4aa2a-da8c-42c7-b7e4-87b3e1c86769", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-96ttc
W0814 10:46:17.102] I0814 10:46:17.012359   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779574-6826", Name:"nginx-7dbc4d9f", UID:"c5e4aa2a-da8c-42c7-b7e4-87b3e1c86769", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-tztqg
W0814 10:46:17.103] I0814 10:46:17.013355   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779574-6826", Name:"nginx-7dbc4d9f", UID:"c5e4aa2a-da8c-42c7-b7e4-87b3e1c86769", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-nlnmq
I0814 10:46:17.203] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0814 10:46:21.341] Successful
I0814 10:46:21.341] message:Error from server (Conflict): error when applying patch:
I0814 10:46:21.342] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565779574-6826\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0814 10:46:21.342] to:
I0814 10:46:21.342] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0814 10:46:21.342] Name: "nginx", Namespace: "namespace-1565779574-6826"
I0814 10:46:21.345] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565779574-6826\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-14T10:46:16Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-14T10:46:16Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-14T10:46:17Z"]] "name":"nginx" "namespace":"namespace-1565779574-6826" "resourceVersion":"588" "selfLink":"/apis/apps/v1/namespaces/namespace-1565779574-6826/deployments/nginx" "uid":"7661b097-e41c-4318-899f-467eb31d394e"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] 
"status":map["conditions":[map["lastTransitionTime":"2019-08-14T10:46:17Z" "lastUpdateTime":"2019-08-14T10:46:17Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-14T10:46:17Z" "lastUpdateTime":"2019-08-14T10:46:17Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0814 10:46:21.346] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0814 10:46:21.346] has:Error from server (Conflict)
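The Conflict above is driven by the manifest itself: the last-applied configuration it carries pins resourceVersion "99" (visible in the patch body printed above), so the update that kubectl apply computes fails the server's optimistic-concurrency check, exactly as the trailing hint says. A sketch of the failing call (path taken from the output above):
  # The file embeds metadata.resourceVersion: "99", which no longer matches the live Deployment.
  kubectl apply -f hack/testdata/deployment-label-change2.yaml
  # Error from server (Conflict): ... please apply your changes to the latest version and try again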
W0814 10:46:23.423] I0814 10:46:23.422930   53084 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1565779566-31949
W0814 10:46:25.817] E0814 10:46:25.815219   53084 replica_set.go:450] Sync "namespace-1565779574-6826/nginx-7dbc4d9f" failed with replicasets.apps "nginx-7dbc4d9f" not found
I0814 10:46:26.635] deployment.apps/nginx configured
W0814 10:46:26.735] I0814 10:46:26.639737   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565779574-6826", Name:"nginx", UID:"391f1812-16d4-44f2-9016-3af9a9cd38f7", APIVersion:"apps/v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0814 10:46:26.736] I0814 10:46:26.644280   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779574-6826", Name:"nginx-594f77b9f6", UID:"20254092-8a42-4d5a-93fd-8722bf04ed0d", APIVersion:"apps/v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-dwpd7
W0814 10:46:26.736] I0814 10:46:26.648907   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779574-6826", Name:"nginx-594f77b9f6", UID:"20254092-8a42-4d5a-93fd-8722bf04ed0d", APIVersion:"apps/v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-772pj
W0814 10:46:26.737] I0814 10:46:26.651178   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779574-6826", Name:"nginx-594f77b9f6", UID:"20254092-8a42-4d5a-93fd-8722bf04ed0d", APIVersion:"apps/v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-dzx4k
I0814 10:46:26.837] Successful
... skipping 192 lines ...
I0814 10:46:34.155] +++ [0814 10:46:34] Creating namespace namespace-1565779594-25292
I0814 10:46:34.243] namespace/namespace-1565779594-25292 created
I0814 10:46:34.334] Context "test" modified.
I0814 10:46:34.341] +++ [0814 10:46:34] Testing kubectl get
I0814 10:46:34.446] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:34.549] Successful
I0814 10:46:34.550] message:Error from server (NotFound): pods "abc" not found
I0814 10:46:34.550] has:pods "abc" not found
I0814 10:46:34.654] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:34.752] Successful
I0814 10:46:34.753] message:Error from server (NotFound): pods "abc" not found
I0814 10:46:34.753] has:pods "abc" not found
I0814 10:46:34.852] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:34.948] Successful
I0814 10:46:34.949] message:{
I0814 10:46:34.949]     "apiVersion": "v1",
I0814 10:46:34.949]     "items": [],
... skipping 23 lines ...
I0814 10:46:35.345] has not:No resources found
I0814 10:46:35.445] Successful
I0814 10:46:35.445] message:NAME
I0814 10:46:35.445] has not:No resources found
I0814 10:46:35.550] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:35.664] Successful
I0814 10:46:35.665] message:error: the server doesn't have a resource type "foobar"
I0814 10:46:35.665] has not:No resources found
I0814 10:46:35.767] Successful
I0814 10:46:35.767] message:No resources found in namespace-1565779594-25292 namespace.
I0814 10:46:35.767] has:No resources found
I0814 10:46:35.870] Successful
I0814 10:46:35.870] message:
I0814 10:46:35.870] has not:No resources found
I0814 10:46:35.965] Successful
I0814 10:46:35.966] message:No resources found in namespace-1565779594-25292 namespace.
I0814 10:46:35.966] has:No resources found
I0814 10:46:36.066] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:36.163] Successful
I0814 10:46:36.165] message:Error from server (NotFound): pods "abc" not found
I0814 10:46:36.165] has:pods "abc" not found
I0814 10:46:36.166] FAIL!
I0814 10:46:36.166] message:Error from server (NotFound): pods "abc" not found
I0814 10:46:36.167] has not:List
I0814 10:46:36.167] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0814 10:46:36.301] Successful
I0814 10:46:36.302] message:I0814 10:46:36.242333   63670 loader.go:375] Config loaded from file:  /tmp/tmp.scJW2ks48B/.kube/config
I0814 10:46:36.302] I0814 10:46:36.244265   63670 round_trippers.go:471] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0814 10:46:36.303] I0814 10:46:36.270684   63670 round_trippers.go:471] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0814 10:46:41.932] Successful
I0814 10:46:41.932] message:NAME    DATA   AGE
I0814 10:46:41.932] one     0      0s
I0814 10:46:41.933] three   0      0s
I0814 10:46:41.933] two     0      0s
I0814 10:46:41.933] STATUS    REASON          MESSAGE
I0814 10:46:41.933] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 10:46:41.933] has not:watch is only supported on individual resources
I0814 10:46:43.024] Successful
I0814 10:46:43.025] message:STATUS    REASON          MESSAGE
I0814 10:46:43.025] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 10:46:43.026] has not:watch is only supported on individual resources
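The InternalError rows above are what the table printer shows when a watch is cut short by a client-side request timeout rather than by the server; a sketch of the kind of invocation that produces it (resource and timeout value are illustrative):
  # The client deadline fires mid-watch, so decoding the next event fails with
  # "unable to decode an event from the watch stream: ... Client.Timeout exceeded while reading body".
  kubectl get configmaps --watch --request-timeout=1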
I0814 10:46:43.030] +++ [0814 10:46:43] Creating namespace namespace-1565779603-17305
I0814 10:46:43.103] namespace/namespace-1565779603-17305 created
I0814 10:46:43.169] Context "test" modified.
I0814 10:46:43.260] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:43.421] pod/valid-pod created
... skipping 104 lines ...
I0814 10:46:43.522] }
I0814 10:46:43.596] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 10:46:43.831] <no value>Successful
I0814 10:46:43.831] message:valid-pod:
I0814 10:46:43.831] has:valid-pod:
I0814 10:46:43.913] Successful
I0814 10:46:43.914] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0814 10:46:43.914] 	template was:
I0814 10:46:43.914] 		{.missing}
I0814 10:46:43.915] 	object given to jsonpath engine was:
I0814 10:46:43.917] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-14T10:46:43Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-14T10:46:43Z"}}, "name":"valid-pod", "namespace":"namespace-1565779603-17305", "resourceVersion":"688", "selfLink":"/api/v1/namespaces/namespace-1565779603-17305/pods/valid-pod", "uid":"9b0f549a-af8c-4f91-9d06-7a0dde5e4861"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0814 10:46:43.917] has:missing is not found
I0814 10:46:43.996] Successful
I0814 10:46:43.996] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0814 10:46:43.997] 	template was:
I0814 10:46:43.997] 		{{.missing}}
I0814 10:46:43.997] 	raw data was:
I0814 10:46:43.999] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-14T10:46:43Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-14T10:46:43Z"}],"name":"valid-pod","namespace":"namespace-1565779603-17305","resourceVersion":"688","selfLink":"/api/v1/namespaces/namespace-1565779603-17305/pods/valid-pod","uid":"9b0f549a-af8c-4f91-9d06-7a0dde5e4861"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0814 10:46:43.999] 	object given to template engine was:
I0814 10:46:44.000] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-14T10:46:43Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-14T10:46:43Z]] name:valid-pod namespace:namespace-1565779603-17305 resourceVersion:688 selfLink:/api/v1/namespaces/namespace-1565779603-17305/pods/valid-pod uid:9b0f549a-af8c-4f91-9d06-7a0dde5e4861] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0814 10:46:44.000] has:map has no entry for key "missing"
W0814 10:46:44.101] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
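Both template failures above come from asking the output printers for a key the Pod object does not have; a sketch of the two invocations (pod name taken from the output above):
  kubectl get pod valid-pod -o jsonpath='{.missing}'        # jsonpath: "missing is not found", full object dumped for debugging
  kubectl get pod valid-pod -o go-template='{{.missing}}'   # go-template: map has no entry for key "missing"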
I0814 10:46:45.080] Successful
I0814 10:46:45.080] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 10:46:45.080] valid-pod   0/1     Pending   0          1s
I0814 10:46:45.081] STATUS      REASON          MESSAGE
I0814 10:46:45.081] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 10:46:45.081] has:STATUS
I0814 10:46:45.081] Successful
I0814 10:46:45.082] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 10:46:45.082] valid-pod   0/1     Pending   0          1s
I0814 10:46:45.082] STATUS      REASON          MESSAGE
I0814 10:46:45.082] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 10:46:45.083] has:valid-pod
I0814 10:46:46.163] Successful
I0814 10:46:46.164] message:pod/valid-pod
I0814 10:46:46.164] has not:STATUS
I0814 10:46:46.165] Successful
I0814 10:46:46.165] message:pod/valid-pod
... skipping 144 lines ...
I0814 10:46:47.279] status:
I0814 10:46:47.279]   phase: Pending
I0814 10:46:47.279]   qosClass: Guaranteed
I0814 10:46:47.279] ---
I0814 10:46:47.279] has:name: valid-pod
I0814 10:46:47.339] Successful
I0814 10:46:47.340] message:Error from server (NotFound): pods "invalid-pod" not found
I0814 10:46:47.340] has:"invalid-pod" not found
I0814 10:46:47.421] pod "valid-pod" deleted
I0814 10:46:47.514] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:46:47.672] pod/redis-master created
I0814 10:46:47.677] pod/valid-pod created
I0814 10:46:47.768] Successful
... skipping 35 lines ...
I0814 10:46:48.872] +++ command: run_kubectl_exec_pod_tests
I0814 10:46:48.883] +++ [0814 10:46:48] Creating namespace namespace-1565779608-27185
I0814 10:46:48.955] namespace/namespace-1565779608-27185 created
I0814 10:46:49.023] Context "test" modified.
I0814 10:46:49.029] +++ [0814 10:46:49] Testing kubectl exec POD COMMAND
I0814 10:46:49.111] Successful
I0814 10:46:49.111] message:Error from server (NotFound): pods "abc" not found
I0814 10:46:49.111] has:pods "abc" not found
I0814 10:46:49.263] pod/test-pod created
I0814 10:46:49.359] Successful
I0814 10:46:49.359] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 10:46:49.360] has not:pods "test-pod" not found
I0814 10:46:49.360] Successful
I0814 10:46:49.361] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 10:46:49.361] has not:pod or type/name must be specified
I0814 10:46:49.438] pod "test-pod" deleted
I0814 10:46:49.457] +++ exit code: 0
I0814 10:46:49.488] Recording: run_kubectl_exec_resource_name_tests
I0814 10:46:49.489] Running command: run_kubectl_exec_resource_name_tests
I0814 10:46:49.509] 
... skipping 2 lines ...
I0814 10:46:49.516] +++ command: run_kubectl_exec_resource_name_tests
I0814 10:46:49.528] +++ [0814 10:46:49] Creating namespace namespace-1565779609-24343
I0814 10:46:49.599] namespace/namespace-1565779609-24343 created
I0814 10:46:49.666] Context "test" modified.
I0814 10:46:49.673] +++ [0814 10:46:49] Testing kubectl exec TYPE/NAME COMMAND
I0814 10:46:49.768] Successful
I0814 10:46:49.769] message:error: the server doesn't have a resource type "foo"
I0814 10:46:49.770] has:error:
I0814 10:46:49.855] Successful
I0814 10:46:49.855] message:Error from server (NotFound): deployments.apps "bar" not found
I0814 10:46:49.855] has:"bar" not found
I0814 10:46:50.014] pod/test-pod created
I0814 10:46:50.175] replicaset.apps/frontend created
W0814 10:46:50.276] I0814 10:46:50.179454   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779609-24343", Name:"frontend", UID:"5848dd41-e2f5-4e7a-90ca-7eab53aac59e", APIVersion:"apps/v1", ResourceVersion:"741", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kq9bf
W0814 10:46:50.277] I0814 10:46:50.190067   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779609-24343", Name:"frontend", UID:"5848dd41-e2f5-4e7a-90ca-7eab53aac59e", APIVersion:"apps/v1", ResourceVersion:"741", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rxj5c
W0814 10:46:50.277] I0814 10:46:50.190618   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779609-24343", Name:"frontend", UID:"5848dd41-e2f5-4e7a-90ca-7eab53aac59e", APIVersion:"apps/v1", ResourceVersion:"741", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qmfxz
I0814 10:46:50.378] configmap/test-set-env-config created
I0814 10:46:50.441] Successful
I0814 10:46:50.442] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0814 10:46:50.442] has:not implemented
I0814 10:46:50.529] Successful
I0814 10:46:50.530] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 10:46:50.530] has not:not found
I0814 10:46:50.531] Successful
I0814 10:46:50.531] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 10:46:50.531] has not:pod or type/name must be specified
I0814 10:46:50.630] Successful
I0814 10:46:50.631] message:Error from server (BadRequest): pod frontend-kq9bf does not have a host assigned
I0814 10:46:50.631] has not:not found
I0814 10:46:50.633] Successful
I0814 10:46:50.633] message:Error from server (BadRequest): pod frontend-kq9bf does not have a host assigned
I0814 10:46:50.633] has not:pod or type/name must be specified
I0814 10:46:50.708] pod "test-pod" deleted
I0814 10:46:50.787] replicaset.apps "frontend" deleted
I0814 10:46:50.870] configmap "test-set-env-config" deleted
I0814 10:46:50.887] +++ exit code: 0
I0814 10:46:50.919] Recording: run_create_secret_tests
I0814 10:46:50.919] Running command: run_create_secret_tests
I0814 10:46:50.937] 
I0814 10:46:50.942] +++ Running case: test-cmd.run_create_secret_tests 
I0814 10:46:50.945] +++ working dir: /go/src/k8s.io/kubernetes
I0814 10:46:50.948] +++ command: run_create_secret_tests
I0814 10:46:51.040] Successful
I0814 10:46:51.040] message:Error from server (NotFound): secrets "mysecret" not found
I0814 10:46:51.041] has:secrets "mysecret" not found
I0814 10:46:51.194] Successful
I0814 10:46:51.194] message:Error from server (NotFound): secrets "mysecret" not found
I0814 10:46:51.194] has:secrets "mysecret" not found
I0814 10:46:51.195] Successful
I0814 10:46:51.196] message:user-specified
I0814 10:46:51.196] has:user-specified
I0814 10:46:51.270] Successful
I0814 10:46:51.346] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"7caaab1b-9a9a-4f2e-a13b-2cfb52224f6f","resourceVersion":"762","creationTimestamp":"2019-08-14T10:46:51Z"}}
... skipping 2 lines ...
I0814 10:46:51.506] has:uid
I0814 10:46:51.580] Successful
I0814 10:46:51.581] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"7caaab1b-9a9a-4f2e-a13b-2cfb52224f6f","resourceVersion":"763","creationTimestamp":"2019-08-14T10:46:51Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-14T10:46:51Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0814 10:46:51.581] has:config1
I0814 10:46:51.650] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"7caaab1b-9a9a-4f2e-a13b-2cfb52224f6f"}}
I0814 10:46:51.737] Successful
I0814 10:46:51.738] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0814 10:46:51.738] has:configmaps "tester-update-cm" not found
I0814 10:46:51.748] +++ exit code: 0
I0814 10:46:51.777] Recording: run_kubectl_create_kustomization_directory_tests
I0814 10:46:51.778] Running command: run_kubectl_create_kustomization_directory_tests
I0814 10:46:51.797] 
I0814 10:46:51.799] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
W0814 10:46:54.409] I0814 10:46:52.259345   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779609-24343", Name:"test-the-deployment-55cf944b", UID:"0666b067-10a9-452a-b6c0-a0c936fb8f32", APIVersion:"apps/v1", ResourceVersion:"772", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-55cf944b-nxtsl
W0814 10:46:54.410] I0814 10:46:52.259832   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779609-24343", Name:"test-the-deployment-55cf944b", UID:"0666b067-10a9-452a-b6c0-a0c936fb8f32", APIVersion:"apps/v1", ResourceVersion:"772", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-55cf944b-wpdnk
I0814 10:46:55.389] Successful
I0814 10:46:55.389] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 10:46:55.389] valid-pod   0/1     Pending   0          0s
I0814 10:46:55.389] STATUS      REASON          MESSAGE
I0814 10:46:55.389] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 10:46:55.389] has:Timeout exceeded while reading body
I0814 10:46:55.472] Successful
I0814 10:46:55.472] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 10:46:55.472] valid-pod   0/1     Pending   0          1s
I0814 10:46:55.473] has:valid-pod
I0814 10:46:55.541] Successful
I0814 10:46:55.541] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0814 10:46:55.542] has:Invalid timeout value
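The timeout complaint above is flag parsing on the client; --request-timeout is the usual source of that exact wording, and it accepts either bare seconds or a Go-style duration. A sketch (the bad value is illustrative):
  kubectl get pod valid-pod --request-timeout=1A    # error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
  kubectl get pod valid-pod --request-timeout=10s   # accepted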
I0814 10:46:55.618] pod "valid-pod" deleted
I0814 10:46:55.636] +++ exit code: 0
I0814 10:46:55.668] Recording: run_crd_tests
I0814 10:46:55.668] Running command: run_crd_tests
I0814 10:46:55.688] 
... skipping 245 lines ...
I0814 10:47:01.897] foo.company.com/test patched
I0814 10:47:02.029] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0814 10:47:02.130] foo.company.com/test patched
I0814 10:47:02.223] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0814 10:47:02.310] foo.company.com/test patched
I0814 10:47:02.402] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0814 10:47:02.555] +++ [0814 10:47:02] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0814 10:47:02.618] {
I0814 10:47:02.619]     "apiVersion": "company.com/v1",
I0814 10:47:02.620]     "kind": "Foo",
I0814 10:47:02.620]     "metadata": {
I0814 10:47:02.621]         "annotations": {
I0814 10:47:02.621]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 346 lines ...
I0814 10:47:29.401] crd.sh:455: Successful get bars {{len .items}}: 1
I0814 10:47:29.506] namespace "non-native-resources" deleted
I0814 10:47:34.739] crd.sh:458: Successful get bars {{len .items}}: 0
I0814 10:47:34.900] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I0814 10:47:34.998] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0814 10:47:35.099] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
W0814 10:47:35.200] Error from server (NotFound): namespaces "non-native-resources" not found
I0814 10:47:35.301] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0814 10:47:35.301] +++ exit code: 0
I0814 10:47:35.303] Recording: run_cmd_with_img_tests
I0814 10:47:35.303] Running command: run_cmd_with_img_tests
I0814 10:47:35.323] 
I0814 10:47:35.325] +++ Running case: test-cmd.run_cmd_with_img_tests 
... skipping 8 lines ...
I0814 10:47:35.591] has:deployment.apps/test1 created
W0814 10:47:35.692] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 10:47:35.693] I0814 10:47:35.582726   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565779655-11671", Name:"test1", UID:"27984506-94f7-4dc5-bfc9-35788f56da4a", APIVersion:"apps/v1", ResourceVersion:"920", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-9797f89d8 to 1
W0814 10:47:35.693] I0814 10:47:35.588029   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779655-11671", Name:"test1-9797f89d8", UID:"d983a886-8ddd-4454-97fd-35daeac3ee83", APIVersion:"apps/v1", ResourceVersion:"921", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-7hvkc
I0814 10:47:35.794] deployment.apps "test1" deleted
I0814 10:47:35.794] Successful
I0814 10:47:35.794] message:error: Invalid image name "InvalidImageName": invalid reference format
I0814 10:47:35.795] has:error: Invalid image name "InvalidImageName": invalid reference format
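The image-name error above is raised by kubectl run before anything reaches the API server: "InvalidImageName" fails image-reference parsing (uppercase letters are not legal in a repository name). A sketch, mirroring the deprecated generator shown above (the name test2 is illustrative):
  kubectl run test2 --generator=deployment/apps.v1 --image=InvalidImageName
  # error: Invalid image name "InvalidImageName": invalid reference format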
I0814 10:47:35.803] +++ exit code: 0
I0814 10:47:35.843] +++ [0814 10:47:35] Testing recursive resources
I0814 10:47:35.849] +++ [0814 10:47:35] Creating namespace namespace-1565779655-9836
I0814 10:47:35.928] namespace/namespace-1565779655-9836 created
I0814 10:47:36.002] Context "test" modified.
I0814 10:47:36.095] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:47:36.420] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:36.422] Successful
I0814 10:47:36.422] message:pod/busybox0 created
I0814 10:47:36.422] pod/busybox1 created
I0814 10:47:36.422] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 10:47:36.422] has:error validating data: kind not set
I0814 10:47:36.510] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:36.689] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0814 10:47:36.692] Successful
I0814 10:47:36.692] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 10:47:36.693] has:Object 'Kind' is missing
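All of the recursive cases in this block follow the same pattern: the directory is processed with --recursive, the well-formed busybox0/busybox1 manifests go through, and the one file that spells its kind field as "ind" (visible in the JSON above) cannot be decoded, so each command also reports a decode/validation error and exits non-zero. A sketch of the invocation style (directory path taken from the error above):
  kubectl create -f hack/testdata/recursive/pod --recursive
  # pod/busybox0 created
  # pod/busybox1 created
  # error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": ... kind not set ...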
I0814 10:47:36.784] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:37.076] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 10:47:37.079] Successful
I0814 10:47:37.079] message:pod/busybox0 replaced
I0814 10:47:37.080] pod/busybox1 replaced
I0814 10:47:37.080] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 10:47:37.081] has:error validating data: kind not set
I0814 10:47:37.176] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:37.304] (BSuccessful
I0814 10:47:37.304] message:Name:         busybox0
I0814 10:47:37.304] Namespace:    namespace-1565779655-9836
I0814 10:47:37.304] Priority:     0
I0814 10:47:37.305] Node:         <none>
... skipping 159 lines ...
I0814 10:47:37.321] has:Object 'Kind' is missing
I0814 10:47:37.407] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:37.600] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0814 10:47:37.602] Successful
I0814 10:47:37.603] message:pod/busybox0 annotated
I0814 10:47:37.603] pod/busybox1 annotated
I0814 10:47:37.603] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 10:47:37.604] has:Object 'Kind' is missing
I0814 10:47:37.696] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:38.005] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 10:47:38.007] Successful
I0814 10:47:38.008] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 10:47:38.008] pod/busybox0 configured
I0814 10:47:38.008] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 10:47:38.008] pod/busybox1 configured
I0814 10:47:38.008] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 10:47:38.008] has:error validating data: kind not set
I0814 10:47:38.098] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:47:38.259] deployment.apps/nginx created
W0814 10:47:38.360] W0814 10:47:35.911649   49631 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 10:47:38.360] E0814 10:47:35.914028   53084 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:38.361] W0814 10:47:36.008435   49631 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 10:47:38.361] E0814 10:47:36.010549   53084 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:38.361] W0814 10:47:36.113618   49631 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 10:47:38.361] E0814 10:47:36.115565   53084 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:38.362] W0814 10:47:36.216998   49631 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 10:47:38.362] E0814 10:47:36.218707   53084 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:38.362] E0814 10:47:36.915935   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:38.363] E0814 10:47:37.011881   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:38.363] E0814 10:47:37.117049   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:38.363] E0814 10:47:37.220276   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:38.363] E0814 10:47:37.917726   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:38.363] E0814 10:47:38.014194   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:38.364] E0814 10:47:38.118759   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:38.364] E0814 10:47:38.222083   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:38.364] I0814 10:47:38.264968   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565779655-9836", Name:"nginx", UID:"c113cf3d-ebad-4989-9b0f-d8e1f0b739d1", APIVersion:"apps/v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0814 10:47:38.365] I0814 10:47:38.269902   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779655-9836", Name:"nginx-bbbbb95b5", UID:"b8fb3b01-d2eb-4609-832c-e55010f144e4", APIVersion:"apps/v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-4qrw5
W0814 10:47:38.365] I0814 10:47:38.273800   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779655-9836", Name:"nginx-bbbbb95b5", UID:"b8fb3b01-d2eb-4609-832c-e55010f144e4", APIVersion:"apps/v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-9f968
W0814 10:47:38.365] I0814 10:47:38.274133   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779655-9836", Name:"nginx-bbbbb95b5", UID:"b8fb3b01-d2eb-4609-832c-e55010f144e4", APIVersion:"apps/v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-l6h6n
I0814 10:47:38.466] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 10:47:38.469] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 44 lines ...
I0814 10:47:38.726] deployment.apps "nginx" deleted
I0814 10:47:38.823] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:38.993] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:38.996] Successful
I0814 10:47:38.997] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0814 10:47:38.997] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 10:47:38.997] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 10:47:38.997] has:Object 'Kind' is missing
I0814 10:47:39.089] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:39.174] Successful
I0814 10:47:39.174] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 10:47:39.175] has:busybox0:busybox1:
I0814 10:47:39.176] Successful
I0814 10:47:39.176] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 10:47:39.176] has:Object 'Kind' is missing
I0814 10:47:39.267] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:39.361] pod/busybox0 labeled
I0814 10:47:39.362] pod/busybox1 labeled
I0814 10:47:39.363] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 10:47:39.456] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0814 10:47:39.458] Successful
I0814 10:47:39.458] message:pod/busybox0 labeled
I0814 10:47:39.458] pod/busybox1 labeled
I0814 10:47:39.459] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 10:47:39.459] has:Object 'Kind' is missing
I0814 10:47:39.547] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:39.634] pod/busybox0 patched
I0814 10:47:39.634] pod/busybox1 patched
I0814 10:47:39.634] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 10:47:39.722] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0814 10:47:39.724] Successful
I0814 10:47:39.725] message:pod/busybox0 patched
I0814 10:47:39.725] pod/busybox1 patched
I0814 10:47:39.725] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 10:47:39.725] has:Object 'Kind' is missing
I0814 10:47:39.811] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:39.985] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:47:39.987] Successful
I0814 10:47:39.987] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 10:47:39.987] pod "busybox0" force deleted
I0814 10:47:39.987] pod "busybox1" force deleted
I0814 10:47:39.988] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 10:47:39.988] has:Object 'Kind' is missing
I0814 10:47:40.070] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:47:40.236] replicationcontroller/busybox0 created
I0814 10:47:40.241] replicationcontroller/busybox1 created
I0814 10:47:40.339] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:40.427] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:40.514] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 10:47:40.598] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 10:47:40.773] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 10:47:40.859] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 10:47:40.861] Successful
I0814 10:47:40.862] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0814 10:47:40.862] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0814 10:47:40.862] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 10:47:40.862] has:Object 'Kind' is missing
I0814 10:47:40.942] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0814 10:47:41.025] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0814 10:47:41.117] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:41.202] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 10:47:41.290] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 10:47:41.474] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 10:47:41.563] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 10:47:41.565] Successful
I0814 10:47:41.565] message:service/busybox0 exposed
I0814 10:47:41.565] service/busybox1 exposed
I0814 10:47:41.566] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 10:47:41.566] has:Object 'Kind' is missing
I0814 10:47:41.658] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:41.748] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 10:47:41.835] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 10:47:42.039] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0814 10:47:42.128] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0814 10:47:42.129] Successful
I0814 10:47:42.130] message:replicationcontroller/busybox0 scaled
I0814 10:47:42.130] replicationcontroller/busybox1 scaled
I0814 10:47:42.131] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 10:47:42.131] has:Object 'Kind' is missing
I0814 10:47:42.216] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:42.397] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:47:42.399] Successful
I0814 10:47:42.400] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 10:47:42.400] replicationcontroller "busybox0" force deleted
I0814 10:47:42.400] replicationcontroller "busybox1" force deleted
I0814 10:47:42.400] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 10:47:42.400] has:Object 'Kind' is missing
I0814 10:47:42.489] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:47:42.654] deployment.apps/nginx1-deployment created
I0814 10:47:42.659] deployment.apps/nginx0-deployment created
W0814 10:47:42.760] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 10:47:42.760] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0814 10:47:42.760] E0814 10:47:38.919351   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.761] E0814 10:47:39.016462   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.761] E0814 10:47:39.120418   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.761] E0814 10:47:39.223818   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.761] E0814 10:47:39.921333   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.761] E0814 10:47:40.018137   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.762] E0814 10:47:40.121978   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.762] E0814 10:47:40.225597   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.762] I0814 10:47:40.240976   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565779655-9836", Name:"busybox0", UID:"d95fec10-d52a-402a-924b-162e7ac6b2cf", APIVersion:"v1", ResourceVersion:"976", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-njzfr
W0814 10:47:42.762] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 10:47:42.763] I0814 10:47:40.245222   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565779655-9836", Name:"busybox1", UID:"4b3b6a83-7640-4f49-96a7-3e19683bed57", APIVersion:"v1", ResourceVersion:"978", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-6qj5d
W0814 10:47:42.763] E0814 10:47:40.922850   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.763] E0814 10:47:41.019494   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.763] E0814 10:47:41.123439   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.763] E0814 10:47:41.227511   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.764] E0814 10:47:41.924051   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.764] I0814 10:47:41.936254   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565779655-9836", Name:"busybox0", UID:"d95fec10-d52a-402a-924b-162e7ac6b2cf", APIVersion:"v1", ResourceVersion:"997", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-rmbbb
W0814 10:47:42.764] I0814 10:47:41.947680   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565779655-9836", Name:"busybox1", UID:"4b3b6a83-7640-4f49-96a7-3e19683bed57", APIVersion:"v1", ResourceVersion:"1001", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-7fknd
W0814 10:47:42.764] E0814 10:47:42.020903   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.765] E0814 10:47:42.124794   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.765] E0814 10:47:42.229161   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:42.765] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 10:47:42.765] I0814 10:47:42.659272   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565779655-9836", Name:"nginx1-deployment", UID:"52c5ecd2-ef09-45ad-b4ac-8feaea756ff3", APIVersion:"apps/v1", ResourceVersion:"1018", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0814 10:47:42.766] I0814 10:47:42.664419   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779655-9836", Name:"nginx1-deployment-84f7f49fb7", UID:"25f2634b-484c-4796-ad1b-3b917e91e4e8", APIVersion:"apps/v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-mxgmc
W0814 10:47:42.766] I0814 10:47:42.664634   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565779655-9836", Name:"nginx0-deployment", UID:"41af8a77-04e1-40bd-b573-cc134dccb1eb", APIVersion:"apps/v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0814 10:47:42.766] I0814 10:47:42.669839   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779655-9836", Name:"nginx1-deployment-84f7f49fb7", UID:"25f2634b-484c-4796-ad1b-3b917e91e4e8", APIVersion:"apps/v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-gr2sr
W0814 10:47:42.767] I0814 10:47:42.671082   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779655-9836", Name:"nginx0-deployment-57475bf54d", UID:"4dcb8a93-51d7-41f6-9a2e-abf16af4149b", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-566n8
W0814 10:47:42.767] I0814 10:47:42.675058   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565779655-9836", Name:"nginx0-deployment-57475bf54d", UID:"4dcb8a93-51d7-41f6-9a2e-abf16af4149b", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-bktf8
I0814 10:47:42.868] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0814 10:47:42.868] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 10:47:43.055] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 10:47:43.057] Successful
I0814 10:47:43.057] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0814 10:47:43.058] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0814 10:47:43.058] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 10:47:43.058] has:Object 'Kind' is missing
I0814 10:47:43.155] deployment.apps/nginx1-deployment paused
I0814 10:47:43.161] deployment.apps/nginx0-deployment paused
W0814 10:47:43.261] E0814 10:47:42.925585   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:43.262] E0814 10:47:43.022930   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:43.263] E0814 10:47:43.126348   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:43.264] E0814 10:47:43.231201   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:47:43.364] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0814 10:47:43.365] Successful
I0814 10:47:43.365] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 10:47:43.365] has:Object 'Kind' is missing
I0814 10:47:43.376] deployment.apps/nginx1-deployment resumed
I0814 10:47:43.384] deployment.apps/nginx0-deployment resumed
... skipping 7 lines ...
I0814 10:47:43.594] 1         <none>
I0814 10:47:43.594] 
I0814 10:47:43.594] deployment.apps/nginx0-deployment 
I0814 10:47:43.594] REVISION  CHANGE-CAUSE
I0814 10:47:43.594] 1         <none>
I0814 10:47:43.594] 
I0814 10:47:43.595] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 10:47:43.595] has:nginx0-deployment
I0814 10:47:43.596] Successful
I0814 10:47:43.596] message:deployment.apps/nginx1-deployment 
I0814 10:47:43.596] REVISION  CHANGE-CAUSE
I0814 10:47:43.596] 1         <none>
I0814 10:47:43.596] 
I0814 10:47:43.596] deployment.apps/nginx0-deployment 
I0814 10:47:43.596] REVISION  CHANGE-CAUSE
I0814 10:47:43.596] 1         <none>
I0814 10:47:43.596] 
I0814 10:47:43.597] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 10:47:43.597] has:nginx1-deployment
I0814 10:47:43.597] Successful
I0814 10:47:43.598] message:deployment.apps/nginx1-deployment 
I0814 10:47:43.598] REVISION  CHANGE-CAUSE
I0814 10:47:43.598] 1         <none>
I0814 10:47:43.598] 
I0814 10:47:43.598] deployment.apps/nginx0-deployment 
I0814 10:47:43.598] REVISION  CHANGE-CAUSE
I0814 10:47:43.599] 1         <none>
I0814 10:47:43.599] 
I0814 10:47:43.599] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 10:47:43.599] has:Object 'Kind' is missing
I0814 10:47:43.673] deployment.apps "nginx1-deployment" force deleted
I0814 10:47:43.678] deployment.apps "nginx0-deployment" force deleted
W0814 10:47:43.779] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 10:47:43.780] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0814 10:47:43.928] E0814 10:47:43.927518   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:44.025] E0814 10:47:44.024757   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:44.128] E0814 10:47:44.128113   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:44.234] E0814 10:47:44.233392   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:47:44.769] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:47:44.925] replicationcontroller/busybox0 created
I0814 10:47:44.929] replicationcontroller/busybox1 created
I0814 10:47:45.028] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 10:47:45.115] Successful
I0814 10:47:45.116] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0814 10:47:45.117] message:no rollbacker has been implemented for "ReplicationController"
I0814 10:47:45.117] no rollbacker has been implemented for "ReplicationController"
I0814 10:47:45.118] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 10:47:45.118] has:Object 'Kind' is missing
I0814 10:47:45.208] Successful
I0814 10:47:45.209] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 10:47:45.209] error: replicationcontrollers "busybox0" pausing is not supported
I0814 10:47:45.209] error: replicationcontrollers "busybox1" pausing is not supported
I0814 10:47:45.209] has:Object 'Kind' is missing
I0814 10:47:45.210] Successful
I0814 10:47:45.211] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 10:47:45.211] error: replicationcontrollers "busybox0" pausing is not supported
I0814 10:47:45.211] error: replicationcontrollers "busybox1" pausing is not supported
I0814 10:47:45.211] has:replicationcontrollers "busybox0" pausing is not supported
I0814 10:47:45.213] Successful
I0814 10:47:45.213] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 10:47:45.214] error: replicationcontrollers "busybox0" pausing is not supported
I0814 10:47:45.214] error: replicationcontrollers "busybox1" pausing is not supported
I0814 10:47:45.214] has:replicationcontrollers "busybox1" pausing is not supported
I0814 10:47:45.306] Successful
I0814 10:47:45.307] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 10:47:45.307] error: replicationcontrollers "busybox0" resuming is not supported
I0814 10:47:45.307] error: replicationcontrollers "busybox1" resuming is not supported
I0814 10:47:45.308] has:Object 'Kind' is missing
I0814 10:47:45.308] Successful
I0814 10:47:45.309] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 10:47:45.309] error: replicationcontrollers "busybox0" resuming is not supported
I0814 10:47:45.309] error: replicationcontrollers "busybox1" resuming is not supported
I0814 10:47:45.309] has:replicationcontrollers "busybox0" resuming is not supported
I0814 10:47:45.311] Successful
I0814 10:47:45.312] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 10:47:45.312] error: replicationcontrollers "busybox0" resuming is not supported
I0814 10:47:45.312] error: replicationcontrollers "busybox1" resuming is not supported
I0814 10:47:45.312] has:replicationcontrollers "busybox0" resuming is not supported
I0814 10:47:45.390] replicationcontroller "busybox0" force deleted
I0814 10:47:45.395] replicationcontroller "busybox1" force deleted
W0814 10:47:45.496] E0814 10:47:44.929173   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:45.496] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 10:47:45.497] I0814 10:47:44.929906   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565779655-9836", Name:"busybox0", UID:"554cada0-1da0-453e-aad7-81077c2e45ab", APIVersion:"v1", ResourceVersion:"1068", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-qw85f
W0814 10:47:45.497] I0814 10:47:44.934520   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565779655-9836", Name:"busybox1", UID:"eb9a3486-946c-411c-b1ed-1311350e87a9", APIVersion:"v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-tm85l
W0814 10:47:45.498] E0814 10:47:45.026095   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:45.498] E0814 10:47:45.129661   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:45.498] E0814 10:47:45.234908   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:45.498] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 10:47:45.499] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0814 10:47:45.931] E0814 10:47:45.930729   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:46.028] E0814 10:47:46.027749   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:46.132] E0814 10:47:46.131270   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:46.237] E0814 10:47:46.236384   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:47:46.402] Recording: run_namespace_tests
I0814 10:47:46.402] Running command: run_namespace_tests
I0814 10:47:46.421] 
I0814 10:47:46.423] +++ Running case: test-cmd.run_namespace_tests 
I0814 10:47:46.427] +++ working dir: /go/src/k8s.io/kubernetes
I0814 10:47:46.428] +++ command: run_namespace_tests
I0814 10:47:46.436] +++ [0814 10:47:46] Testing kubectl(v1:namespaces)
I0814 10:47:46.511] namespace/my-namespace created
I0814 10:47:46.604] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 10:47:46.675] namespace "my-namespace" deleted
W0814 10:47:46.933] E0814 10:47:46.932276   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:47.030] E0814 10:47:47.029209   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:47.133] E0814 10:47:47.132747   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:47.238] E0814 10:47:47.237831   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:47.934] E0814 10:47:47.933989   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:48.031] E0814 10:47:48.030760   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:48.135] E0814 10:47:48.134511   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:48.240] E0814 10:47:48.239341   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:48.936] E0814 10:47:48.935525   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:49.033] E0814 10:47:49.032282   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:49.136] E0814 10:47:49.136051   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:49.241] E0814 10:47:49.240770   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:49.937] E0814 10:47:49.937118   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:50.034] E0814 10:47:50.033960   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:50.138] E0814 10:47:50.137782   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:50.243] E0814 10:47:50.242278   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:50.939] E0814 10:47:50.938627   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:51.036] E0814 10:47:51.035512   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:51.139] E0814 10:47:51.139264   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:51.244] E0814 10:47:51.243763   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:47:51.770] namespace/my-namespace condition met
I0814 10:47:51.855] Successful
I0814 10:47:51.856] message:Error from server (NotFound): namespaces "my-namespace" not found
I0814 10:47:51.856] has: not found
I0814 10:47:51.930] namespace/my-namespace created
I0814 10:47:52.022] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 10:47:52.240] Successful
I0814 10:47:52.241] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 10:47:52.241] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0814 10:47:52.249] namespace "namespace-1565779612-23367" deleted
I0814 10:47:52.249] namespace "namespace-1565779613-16725" deleted
I0814 10:47:52.249] namespace "namespace-1565779615-31388" deleted
I0814 10:47:52.250] namespace "namespace-1565779617-17211" deleted
I0814 10:47:52.250] namespace "namespace-1565779655-11671" deleted
I0814 10:47:52.250] namespace "namespace-1565779655-9836" deleted
I0814 10:47:52.251] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 10:47:52.251] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 10:47:52.251] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 10:47:52.251] has:warning: deleting cluster-scoped resources
I0814 10:47:52.252] Successful
I0814 10:47:52.252] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 10:47:52.252] namespace "kube-node-lease" deleted
I0814 10:47:52.252] namespace "my-namespace" deleted
I0814 10:47:52.253] namespace "namespace-1565779522-25027" deleted
... skipping 27 lines ...
I0814 10:47:52.260] namespace "namespace-1565779612-23367" deleted
I0814 10:47:52.260] namespace "namespace-1565779613-16725" deleted
I0814 10:47:52.260] namespace "namespace-1565779615-31388" deleted
I0814 10:47:52.260] namespace "namespace-1565779617-17211" deleted
I0814 10:47:52.261] namespace "namespace-1565779655-11671" deleted
I0814 10:47:52.261] namespace "namespace-1565779655-9836" deleted
I0814 10:47:52.261] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 10:47:52.261] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 10:47:52.262] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 10:47:52.262] has:namespace "my-namespace" deleted
I0814 10:47:52.346] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0814 10:47:52.417] namespace/other created
I0814 10:47:52.508] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0814 10:47:52.598] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:47:52.754] pod/valid-pod created
I0814 10:47:52.853] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 10:47:52.942] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 10:47:53.020] Successful
I0814 10:47:53.021] message:error: a resource cannot be retrieved by name across all namespaces
I0814 10:47:53.021] has:a resource cannot be retrieved by name across all namespaces
I0814 10:47:53.110] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 10:47:53.190] pod "valid-pod" force deleted
I0814 10:47:53.285] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:47:53.360] namespace "other" deleted
W0814 10:47:53.461] E0814 10:47:51.939917   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:53.462] E0814 10:47:52.037399   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:53.463] E0814 10:47:52.140706   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:53.463] E0814 10:47:52.245433   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:53.463] I0814 10:47:52.909371   53084 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0814 10:47:53.464] E0814 10:47:52.941199   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:53.464] I0814 10:47:53.009836   53084 controller_utils.go:1036] Caches are synced for resource quota controller
W0814 10:47:53.464] E0814 10:47:53.039034   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:53.465] E0814 10:47:53.142171   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:53.465] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 10:47:53.465] E0814 10:47:53.246717   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:53.466] I0814 10:47:53.342175   53084 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0814 10:47:53.466] I0814 10:47:53.442592   53084 controller_utils.go:1036] Caches are synced for garbage collector controller
W0814 10:47:53.943] E0814 10:47:53.942826   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:54.041] E0814 10:47:54.040499   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:54.144] E0814 10:47:54.143797   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:54.249] E0814 10:47:54.248344   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:54.945] E0814 10:47:54.944476   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:55.043] E0814 10:47:55.042691   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:55.146] E0814 10:47:55.145375   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:55.250] E0814 10:47:55.249913   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:55.677] I0814 10:47:55.676612   53084 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1565779655-9836
W0814 10:47:55.681] I0814 10:47:55.680868   53084 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1565779655-9836
W0814 10:47:55.946] E0814 10:47:55.946001   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:56.044] E0814 10:47:56.044181   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:56.147] E0814 10:47:56.146905   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:56.252] E0814 10:47:56.251597   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:56.948] E0814 10:47:56.947616   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:57.046] E0814 10:47:57.045790   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:57.149] E0814 10:47:57.149306   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:57.255] E0814 10:47:57.254497   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:57.949] E0814 10:47:57.949120   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:58.048] E0814 10:47:58.047320   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:58.151] E0814 10:47:58.150888   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:47:58.257] E0814 10:47:58.256126   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:47:58.470] +++ exit code: 0
I0814 10:47:58.505] Recording: run_secrets_test
I0814 10:47:58.505] Running command: run_secrets_test
I0814 10:47:58.524] 
I0814 10:47:58.526] +++ Running case: test-cmd.run_secrets_test 
I0814 10:47:58.529] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 58 lines ...
I0814 10:48:00.396] secret "test-secret" deleted
I0814 10:48:00.473] secret/test-secret created
I0814 10:48:00.560] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 10:48:00.648] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0814 10:48:00.722] secret "test-secret" deleted
W0814 10:48:00.822] I0814 10:47:58.760715   70092 loader.go:375] Config loaded from file:  /tmp/tmp.scJW2ks48B/.kube/config
W0814 10:48:00.823] E0814 10:47:58.950507   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:00.823] E0814 10:47:59.048641   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:00.823] E0814 10:47:59.152226   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:00.823] E0814 10:47:59.257462   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:00.824] E0814 10:47:59.952049   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:00.824] E0814 10:48:00.049862   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:00.824] E0814 10:48:00.153218   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:00.824] E0814 10:48:00.258749   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:48:00.925] secret/secret-string-data created
I0814 10:48:00.956] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 10:48:01.036] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 10:48:01.117] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0814 10:48:01.188] secret "secret-string-data" deleted
I0814 10:48:01.279] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:48:01.426] secret "test-secret" deleted
I0814 10:48:01.501] namespace "test-secrets" deleted
W0814 10:48:01.602] E0814 10:48:00.953498   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:01.602] E0814 10:48:01.051156   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:01.603] E0814 10:48:01.154629   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:01.603] E0814 10:48:01.260086   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:01.955] E0814 10:48:01.954864   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:02.053] E0814 10:48:02.053027   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:02.156] E0814 10:48:02.156067   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:02.262] E0814 10:48:02.261456   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:02.957] E0814 10:48:02.956345   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:03.055] E0814 10:48:03.054391   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:03.158] E0814 10:48:03.157622   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:03.263] E0814 10:48:03.262912   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:03.958] E0814 10:48:03.957666   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:04.056] E0814 10:48:04.055815   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:04.159] E0814 10:48:04.159068   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:04.264] E0814 10:48:04.264122   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:04.959] E0814 10:48:04.959125   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:05.057] E0814 10:48:05.057175   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:05.161] E0814 10:48:05.160640   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:05.266] E0814 10:48:05.265495   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:05.961] E0814 10:48:05.961085   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:06.059] E0814 10:48:06.058523   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:06.162] E0814 10:48:06.161967   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:06.267] E0814 10:48:06.267076   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:48:06.593] +++ exit code: 0
I0814 10:48:06.625] Recording: run_configmap_tests
I0814 10:48:06.625] Running command: run_configmap_tests
I0814 10:48:06.643] 
I0814 10:48:06.645] +++ Running case: test-cmd.run_configmap_tests 
I0814 10:48:06.647] +++ working dir: /go/src/k8s.io/kubernetes
I0814 10:48:06.649] +++ command: run_configmap_tests
I0814 10:48:06.659] +++ [0814 10:48:06] Creating namespace namespace-1565779686-25441
I0814 10:48:06.730] namespace/namespace-1565779686-25441 created
I0814 10:48:06.807] Context "test" modified.
I0814 10:48:06.813] +++ [0814 10:48:06] Testing configmaps
W0814 10:48:06.963] E0814 10:48:06.962469   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:07.060] E0814 10:48:07.059852   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:48:07.161] configmap/test-configmap created
I0814 10:48:07.161] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0814 10:48:07.161] configmap "test-configmap" deleted
I0814 10:48:07.248] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0814 10:48:07.317] namespace/test-configmaps created
I0814 10:48:07.402] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
... skipping 3 lines ...
I0814 10:48:07.700] configmap/test-binary-configmap created
I0814 10:48:07.785] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0814 10:48:07.866] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0814 10:48:08.090] configmap "test-configmap" deleted
I0814 10:48:08.166] configmap "test-binary-configmap" deleted
I0814 10:48:08.243] namespace "test-configmaps" deleted
W0814 10:48:08.343] E0814 10:48:07.163162   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:08.344] E0814 10:48:07.268931   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:08.344] E0814 10:48:07.963991   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:08.344] E0814 10:48:08.061277   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:08.345] E0814 10:48:08.164393   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:08.345] E0814 10:48:08.270845   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:08.966] E0814 10:48:08.965555   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:09.065] E0814 10:48:09.063986   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:09.166] E0814 10:48:09.166025   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:09.273] E0814 10:48:09.272489   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:09.967] E0814 10:48:09.967181   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:10.066] E0814 10:48:10.065440   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:10.168] E0814 10:48:10.167716   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:10.274] E0814 10:48:10.273955   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:10.969] E0814 10:48:10.968672   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:11.067] E0814 10:48:11.066797   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:11.170] E0814 10:48:11.169396   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:11.276] E0814 10:48:11.275502   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:11.970] E0814 10:48:11.970030   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:12.069] E0814 10:48:12.068327   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:12.171] E0814 10:48:12.170999   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:12.277] E0814 10:48:12.277131   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:12.972] E0814 10:48:12.971676   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:13.070] E0814 10:48:13.069865   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:13.173] E0814 10:48:13.172508   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:13.278] E0814 10:48:13.278025   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:48:13.379] +++ exit code: 0
I0814 10:48:13.384] Recording: run_client_config_tests
I0814 10:48:13.384] Running command: run_client_config_tests
I0814 10:48:13.402] 
I0814 10:48:13.403] +++ Running case: test-cmd.run_client_config_tests 
I0814 10:48:13.406] +++ working dir: /go/src/k8s.io/kubernetes
I0814 10:48:13.408] +++ command: run_client_config_tests
I0814 10:48:13.420] +++ [0814 10:48:13] Creating namespace namespace-1565779693-11691
I0814 10:48:13.493] namespace/namespace-1565779693-11691 created
I0814 10:48:13.563] Context "test" modified.
I0814 10:48:13.569] +++ [0814 10:48:13] Testing client config
I0814 10:48:13.639] Successful
I0814 10:48:13.639] message:error: stat missing: no such file or directory
I0814 10:48:13.640] has:missing: no such file or directory
I0814 10:48:13.705] Successful
I0814 10:48:13.706] message:error: stat missing: no such file or directory
I0814 10:48:13.706] has:missing: no such file or directory
I0814 10:48:13.774] Successful
I0814 10:48:13.774] message:error: stat missing: no such file or directory
I0814 10:48:13.774] has:missing: no such file or directory
I0814 10:48:13.843] Successful
I0814 10:48:13.844] message:Error in configuration: context was not found for specified context: missing-context
I0814 10:48:13.844] has:context was not found for specified context: missing-context
I0814 10:48:13.916] Successful
I0814 10:48:13.917] message:error: no server found for cluster "missing-cluster"
I0814 10:48:13.917] has:no server found for cluster "missing-cluster"
I0814 10:48:13.989] Successful
I0814 10:48:13.989] message:error: auth info "missing-user" does not exist
I0814 10:48:13.989] has:auth info "missing-user" does not exist
W0814 10:48:14.090] E0814 10:48:13.973259   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:14.090] E0814 10:48:14.071361   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:14.174] E0814 10:48:14.174153   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:48:14.275] Successful
I0814 10:48:14.275] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0814 10:48:14.275] has:error loading config file
I0814 10:48:14.275] Successful
I0814 10:48:14.276] message:error: stat missing-config: no such file or directory
I0814 10:48:14.276] has:no such file or directory
I0814 10:48:14.276] +++ exit code: 0
I0814 10:48:14.276] Recording: run_service_accounts_tests
I0814 10:48:14.276] Running command: run_service_accounts_tests
I0814 10:48:14.276] 
I0814 10:48:14.276] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0814 10:48:14.603] namespace/test-service-accounts created
I0814 10:48:14.693] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0814 10:48:14.768] serviceaccount/test-service-account created
I0814 10:48:14.863] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0814 10:48:14.940] serviceaccount "test-service-account" deleted
I0814 10:48:15.027] namespace "test-service-accounts" deleted
W0814 10:48:15.128] E0814 10:48:14.279597   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:15.128] E0814 10:48:14.975156   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:15.129] E0814 10:48:15.072938   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:15.176] E0814 10:48:15.175835   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:15.282] E0814 10:48:15.281663   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:15.977] E0814 10:48:15.976828   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:16.075] E0814 10:48:16.074518   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:16.178] E0814 10:48:16.177667   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:16.284] E0814 10:48:16.283394   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:16.979] E0814 10:48:16.978620   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:17.076] E0814 10:48:17.076233   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:17.180] E0814 10:48:17.179356   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:17.285] E0814 10:48:17.285078   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:17.981] E0814 10:48:17.980420   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:18.078] E0814 10:48:18.077845   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:18.181] E0814 10:48:18.180854   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:18.287] E0814 10:48:18.286674   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:18.982] E0814 10:48:18.981907   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:19.080] E0814 10:48:19.079477   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:19.183] E0814 10:48:19.182557   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:19.289] E0814 10:48:19.288497   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:19.984] E0814 10:48:19.983459   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:20.081] E0814 10:48:20.080322   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:48:20.181] +++ exit code: 0
I0814 10:48:20.182] Recording: run_job_tests
I0814 10:48:20.183] Running command: run_job_tests
I0814 10:48:20.203] 
I0814 10:48:20.205] +++ Running case: test-cmd.run_job_tests 
I0814 10:48:20.208] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0814 10:48:20.983] Labels:                        run=pi
I0814 10:48:20.983] Annotations:                   <none>
I0814 10:48:20.983] Schedule:                      59 23 31 2 *
I0814 10:48:20.984] Concurrency Policy:            Allow
I0814 10:48:20.984] Suspend:                       False
I0814 10:48:20.984] Successful Job History Limit:  3
I0814 10:48:20.984] Failed Job History Limit:      1
I0814 10:48:20.984] Starting Deadline Seconds:     <unset>
I0814 10:48:20.984] Selector:                      <unset>
I0814 10:48:20.984] Parallelism:                   <unset>
I0814 10:48:20.984] Completions:                   <unset>
I0814 10:48:20.984] Pod Template:
I0814 10:48:20.984]   Labels:  run=pi
... skipping 32 lines ...
I0814 10:48:21.511]                 run=pi
I0814 10:48:21.511] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0814 10:48:21.511] Controlled By:  CronJob/pi
I0814 10:48:21.511] Parallelism:    1
I0814 10:48:21.511] Completions:    1
I0814 10:48:21.511] Start Time:     Wed, 14 Aug 2019 10:48:21 +0000
I0814 10:48:21.512] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0814 10:48:21.512] Pod Template:
I0814 10:48:21.512]   Labels:  controller-uid=e65a9ef1-53e2-4846-80b4-106d49a1e343
I0814 10:48:21.512]            job-name=test-job
I0814 10:48:21.512]            run=pi
I0814 10:48:21.512]   Containers:
I0814 10:48:21.512]    pi:
... skipping 15 lines ...
I0814 10:48:21.513]   Type    Reason            Age   From            Message
I0814 10:48:21.513]   ----    ------            ----  ----            -------
I0814 10:48:21.514]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-8h5ts
I0814 10:48:21.592] job.batch "test-job" deleted
I0814 10:48:21.675] cronjob.batch "pi" deleted
I0814 10:48:21.756] namespace "test-jobs" deleted
W0814 10:48:21.857] E0814 10:48:20.184728   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:21.858] E0814 10:48:20.290033   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:21.859] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 10:48:21.859] E0814 10:48:20.984989   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:21.860] E0814 10:48:21.081924   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:21.860] E0814 10:48:21.186223   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:21.861] I0814 10:48:21.247329   53084 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"e65a9ef1-53e2-4846-80b4-106d49a1e343", APIVersion:"batch/v1", ResourceVersion:"1348", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-8h5ts
W0814 10:48:21.861] E0814 10:48:21.291451   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:21.987] E0814 10:48:21.986864   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:22.084] E0814 10:48:22.083799   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:22.188] E0814 10:48:22.187795   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:22.293] E0814 10:48:22.293207   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:22.989] E0814 10:48:22.988936   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:23.086] E0814 10:48:23.086068   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:23.190] E0814 10:48:23.189351   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:23.295] E0814 10:48:23.294853   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:23.991] E0814 10:48:23.990741   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:24.088] E0814 10:48:24.087773   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:24.191] E0814 10:48:24.191068   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:24.297] E0814 10:48:24.296424   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:24.993] E0814 10:48:24.992430   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:25.090] E0814 10:48:25.089708   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:25.193] E0814 10:48:25.192730   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:25.298] E0814 10:48:25.298180   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:25.995] E0814 10:48:25.994211   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:26.092] E0814 10:48:26.091339   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:26.195] E0814 10:48:26.194328   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:26.300] E0814 10:48:26.299886   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:48:26.896] +++ exit code: 0
I0814 10:48:26.932] Recording: run_create_job_tests
I0814 10:48:26.933] Running command: run_create_job_tests
I0814 10:48:26.953] 
I0814 10:48:26.955] +++ Running case: test-cmd.run_create_job_tests 
I0814 10:48:26.957] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 27 lines ...
I0814 10:48:28.298] +++ [0814 10:48:28] Testing pod templates
I0814 10:48:28.393] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:48:28.562] podtemplate/nginx created
I0814 10:48:28.658] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 10:48:28.733] NAME    CONTAINERS   IMAGES   POD LABELS
I0814 10:48:28.734] nginx   nginx        nginx    name=nginx
W0814 10:48:28.834] E0814 10:48:26.995891   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:28.835] E0814 10:48:27.092956   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:28.835] I0814 10:48:27.195795   53084 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565779706-7373", Name:"test-job", UID:"5673840a-ce63-408d-9257-942553f3565f", APIVersion:"batch/v1", ResourceVersion:"1366", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-6mdxz
W0814 10:48:28.835] E0814 10:48:27.196220   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:28.835] E0814 10:48:27.301335   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:28.836] I0814 10:48:27.445774   53084 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565779706-7373", Name:"test-job-pi", UID:"bd1aa41a-6e66-4ccd-b88a-e942cd377eb3", APIVersion:"batch/v1", ResourceVersion:"1373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-jljgk
W0814 10:48:28.836] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 10:48:28.836] I0814 10:48:27.809975   53084 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565779706-7373", Name:"my-pi", UID:"040b23a4-180e-4161-95bf-8d5b4b90232b", APIVersion:"batch/v1", ResourceVersion:"1381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-62kzb
W0814 10:48:28.836] E0814 10:48:27.997675   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:28.837] E0814 10:48:28.095161   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:28.837] E0814 10:48:28.198869   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:28.837] E0814 10:48:28.303517   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:28.837] I0814 10:48:28.558227   49631 controller.go:606] quota admission added evaluator for: podtemplates
I0814 10:48:28.938] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 10:48:28.993] podtemplate "nginx" deleted
I0814 10:48:29.091] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 10:48:29.104] +++ exit code: 0
I0814 10:48:29.138] Recording: run_service_tests
... skipping 3 lines ...
I0814 10:48:29.164] +++ working dir: /go/src/k8s.io/kubernetes
I0814 10:48:29.166] +++ command: run_service_tests
I0814 10:48:29.242] Context "test" modified.
I0814 10:48:29.249] +++ [0814 10:48:29] Testing kubectl(v1:services)
I0814 10:48:29.345] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 10:48:29.567] service/redis-master created
W0814 10:48:29.668] E0814 10:48:28.999024   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:29.669] E0814 10:48:29.096655   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:29.670] E0814 10:48:29.200515   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:29.670] E0814 10:48:29.305159   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:48:29.771] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 10:48:29.820] core.sh:864: Successful describe services redis-master:
I0814 10:48:29.820] Name:              redis-master
I0814 10:48:29.821] Namespace:         default
I0814 10:48:29.821] Labels:            app=redis
I0814 10:48:29.821]                    role=master
... skipping 51 lines ...
I0814 10:48:30.114] Port:              <unset>  6379/TCP
I0814 10:48:30.114] TargetPort:        6379/TCP
I0814 10:48:30.114] Endpoints:         <none>
I0814 10:48:30.114] Session Affinity:  None
I0814 10:48:30.114] Events:            <none>
I0814 10:48:30.115] 
W0814 10:48:30.215] E0814 10:48:30.000413   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:30.216] E0814 10:48:30.098017   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:30.216] E0814 10:48:30.201741   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:30.307] E0814 10:48:30.307059   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:48:30.408] Successful describe services:
I0814 10:48:30.408] Name:              kubernetes
I0814 10:48:30.408] Namespace:         default
I0814 10:48:30.409] Labels:            component=apiserver
I0814 10:48:30.409]                    provider=kubernetes
I0814 10:48:30.409] Annotations:       <none>
... skipping 238 lines ...
I0814 10:48:31.231]   selector:
I0814 10:48:31.231]     role: padawan
I0814 10:48:31.231]   sessionAffinity: None
I0814 10:48:31.231]   type: ClusterIP
I0814 10:48:31.231] status:
I0814 10:48:31.231]   loadBalancer: {}
W0814 10:48:31.331] E0814 10:48:31.002004   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:31.332] E0814 10:48:31.099542   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:31.332] E0814 10:48:31.203409   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:31.332] error: you must specify resources by --filename when --local is set.
W0814 10:48:31.332] Example resource specifications include:
W0814 10:48:31.332]    '-f rsrc.yaml'
W0814 10:48:31.333]    '--filename=rsrc.json'
W0814 10:48:31.333] E0814 10:48:31.308790   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:48:31.433] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0814 10:48:31.563] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 10:48:31.644] service "redis-master" deleted
I0814 10:48:31.737] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 10:48:31.828] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 10:48:31.981] service/redis-master created
... skipping 5 lines ...
I0814 10:48:32.696] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0814 10:48:32.780] service "redis-master" deleted
I0814 10:48:32.864] service "service-v1-test" deleted
I0814 10:48:32.958] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 10:48:33.049] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 10:48:33.201] service/redis-master created
W0814 10:48:33.302] E0814 10:48:32.003519   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:33.303] E0814 10:48:32.101156   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:33.304] E0814 10:48:32.204902   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:33.304] E0814 10:48:32.310821   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:33.305] E0814 10:48:33.005117   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:33.305] E0814 10:48:33.102637   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:33.306] E0814 10:48:33.206127   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:33.312] E0814 10:48:33.312301   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 10:48:33.413] service/redis-slave created
I0814 10:48:33.464] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0814 10:48:33.550] Successful
I0814 10:48:33.551] message:NAME           RSRC
I0814 10:48:33.551] kubernetes     144
I0814 10:48:33.551] redis-master   1415
... skipping 84 lines ...
I0814 10:48:38.463] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0814 10:48:38.553] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0814 10:48:38.661] daemonset.apps/bind rolled back
I0814 10:48:38.755] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0814 10:48:38.847] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0814 10:48:38.953] Successful
I0814 10:48:38.953] message:error: unable to find specified revision 1000000 in history
I0814 10:48:38.953] has:unable to find specified revision
I0814 10:48:39.041] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0814 10:48:39.136] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0814 10:48:39.236] daemonset.apps/bind rolled back
I0814 10:48:39.333] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0814 10:48:39.426] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0814 10:48:40.758] Namespace:    namespace-1565779719-14928
I0814 10:48:40.758] Selector:     app=guestbook,tier=frontend
I0814 10:48:40.758] Labels:       app=guestbook
I0814 10:48:40.758]               tier=frontend
I0814 10:48:40.759] Annotations:  <none>
I0814 10:48:40.759] Replicas:     3 current / 3 desired
I0814 10:48:40.759] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0814 10:48:40.759] Pod Template:
I0814 10:48:40.759]   Labels:  app=guestbook
I0814 10:48:40.759]            tier=frontend
I0814 10:48:40.759]   Containers:
I0814 10:48:40.759]    php-redis:
I0814 10:48:40.759]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0814 10:48:40.870] Namespace:    namespace-1565779719-14928
I0814 10:48:40.870] Selector:     app=guestbook,tier=frontend
I0814 10:48:40.870] Labels:       app=guestbook
I0814 10:48:40.870]               tier=frontend
I0814 10:48:40.870] Annotations:  <none>
I0814 10:48:40.870] Replicas:     3 current / 3 desired
I0814 10:48:40.871] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0814 10:48:40.871] Pod Template:
I0814 10:48:40.871]   Labels:  app=guestbook
I0814 10:48:40.871]            tier=frontend
I0814 10:48:40.871]   Containers:
I0814 10:48:40.871]    php-redis:
I0814 10:48:40.871]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0814 10:48:40.977] Namespace:    namespace-1565779719-14928
I0814 10:48:40.977] Selector:     app=guestbook,tier=frontend
I0814 10:48:40.977] Labels:       app=guestbook
I0814 10:48:40.978]               tier=frontend
I0814 10:48:40.978] Annotations:  <none>
I0814 10:48:40.978] Replicas:     3 current / 3 desired
I0814 10:48:40.978] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0814 10:48:40.978] Pod Template:
I0814 10:48:40.978]   Labels:  app=guestbook
I0814 10:48:40.979]            tier=frontend
I0814 10:48:40.979]   Containers:
I0814 10:48:40.979]    php-redis:
I0814 10:48:40.979]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 4 lines ...
I0814 10:48:40.980]       memory:  100Mi
I0814 10:48:40.980]     Environment:
I0814 10:48:40.980]       GET_HOSTS_FROM:  dns
I0814 10:48:40.980]     Mounts:            <none>
I0814 10:48:40.980]   Volumes:             <none>
I0814 10:48:40.980] 
W0814 10:48:41.081] E0814 10:48:34.006351   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.081] E0814 10:48:34.104194   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.082] E0814 10:48:34.207510   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.082] E0814 10:48:34.313979   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.082] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 10:48:41.083] I0814 10:48:34.533683   53084 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"5c478cbd-be96-4700-b0bb-5943aeabc08d", APIVersion:"apps/v1", ResourceVersion:"1431", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-6cdd84c77d to 2
W0814 10:48:41.083] I0814 10:48:34.541096   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"df484903-4d10-43b9-9326-dede25c7e38a", APIVersion:"apps/v1", ResourceVersion:"1432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-5t62r
W0814 10:48:41.083] I0814 10:48:34.544457   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"df484903-4d10-43b9-9326-dede25c7e38a", APIVersion:"apps/v1", ResourceVersion:"1432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-hlgb7
W0814 10:48:41.084] E0814 10:48:35.007821   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.084] E0814 10:48:35.105212   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.084] E0814 10:48:35.208888   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.084] E0814 10:48:35.315382   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.085] I0814 10:48:35.605302   49631 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0814 10:48:41.085] I0814 10:48:35.618030   49631 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0814 10:48:41.085] E0814 10:48:36.009510   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.085] E0814 10:48:36.106767   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.086] E0814 10:48:36.210496   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.086] E0814 10:48:36.316704   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.086] E0814 10:48:37.011741   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.086] E0814 10:48:37.108256   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.087] E0814 10:48:37.212285   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.087] E0814 10:48:37.318148   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.087] E0814 10:48:38.013241   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.088] E0814 10:48:38.109833   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.088] E0814 10:48:38.213825   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.088] E0814 10:48:38.319729   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.088] E0814 10:48:39.014819   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.089] E0814 10:48:39.111267   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.089] E0814 10:48:39.214950   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.094] E0814 10:48:39.258430   53084 daemon_controller.go:302] namespace-1565779716-25485/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1565779716-25485", SelfLink:"/apis/apps/v1/namespaces/namespace-1565779716-25485/daemonsets/bind", UID:"f0d48c4b-284f-4251-ad6e-ab138a633ce6", ResourceVersion:"1500", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701376517, loc:(*time.Location)(0x7213220)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1565779716-25485\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00115f840), Fields:(*v1.Fields)(0xc00115f860)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00115f880), Fields:(*v1.Fields)(0xc00115f8c0)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00115f920), Fields:(*v1.Fields)(0xc00115f960)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00115f980), Fields:(*v1.Fields)(0xc00115f9c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00115f9e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00247b7f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0023f6420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc00115fa00), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001796300)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00247b84c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0814 10:48:41.094] E0814 10:48:39.321098   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.094] E0814 10:48:40.017020   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.095] I0814 10:48:40.083867   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565779719-14928", Name:"frontend", UID:"c8d15fcc-ea25-416b-aa47-ac14f848d53a", APIVersion:"v1", ResourceVersion:"1508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qlggv
W0814 10:48:41.095] I0814 10:48:40.087931   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565779719-14928", Name:"frontend", UID:"c8d15fcc-ea25-416b-aa47-ac14f848d53a", APIVersion:"v1", ResourceVersion:"1508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zh4b2
W0814 10:48:41.095] I0814 10:48:40.088418   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565779719-14928", Name:"frontend", UID:"c8d15fcc-ea25-416b-aa47-ac14f848d53a", APIVersion:"v1", ResourceVersion:"1508", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hs2rn
W0814 10:48:41.096] E0814 10:48:40.112917   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.096] E0814 10:48:40.216709   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.096] E0814 10:48:40.322733   53084 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 10:48:41.097] I0814 10:48:40.521577   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565779719-14928", Name:"frontend", UID:"088e1f0a-697a-49dc-99f5-fddf7fbc1712", APIVersion:"v1", ResourceVersion:"1524", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2lx5z
W0814 10:48:41.097] I0814 10:48:40.525885   53084 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565779719-14928", Name:"frontend", UID:"088e1f0a-697a-49dc-99f5-fddf7fbc1712", APIVersion:"v1", ResourceVersion:"1524", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rjqt6
W0814 10: