Result: FAILURE
Tests: 1 failed / 2470 succeeded
Started: 2019-08-14 08:57
Elapsed: 27m22s
Revision:
Builder: gke-prow-ssd-pool-1a225945-cx5g
pod: 75fe6fea-be71-11e9-854b-e2ddb7348457
resultstore: https://source.cloud.google.com/results/invocations/82c342e6-f6c2-453d-b09a-6f3a6817ec77/targets/test
infra-commit: 89e6e9743
repo: k8s.io/kubernetes
repo-commit: 1f6cb3cb9def97320a5412dcbea1661edd95c29e
repos: k8s.io/kubernetes: master

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptWithPermitPlugin 1m4s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptWithPermitPlugin$
=== RUN   TestPreemptWithPermitPlugin
I0814 09:20:22.726743  110351 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 09:20:22.726767  110351 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 09:20:22.726779  110351 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 09:20:22.726788  110351 master.go:234] Using reconciler: 
I0814 09:20:22.731144  110351 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.731952  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.731973  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.732238  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.732307  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.733092  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.733760  110351 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 09:20:22.733798  110351 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.733854  110351 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 09:20:22.734024  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.734036  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.734069  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.734122  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.734722  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.735088  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.735202  110351 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 09:20:22.735229  110351 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.735297  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.735308  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.735338  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.735383  110351 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 09:20:22.735377  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.735547  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.735602  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.736807  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.737723  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.737780  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.737892  110351 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 09:20:22.737922  110351 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.737966  110351 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 09:20:22.737982  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.737992  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.738021  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.738080  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.738848  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.739695  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.740230  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.740294  110351 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 09:20:22.741038  110351 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.741243  110351 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 09:20:22.741257  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.741272  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.741335  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.741744  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.742558  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.742992  110351 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 09:20:22.743494  110351 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.743669  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.743684  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.743857  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.743933  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.743970  110351 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 09:20:22.744043  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.745206  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.745372  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.745518  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.745556  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.745927  110351 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 09:20:22.746151  110351 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.746220  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.746230  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.746262  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.746305  110351 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 09:20:22.746435  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.746947  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.758485  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.758672  110351 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 09:20:22.758694  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.758753  110351 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 09:20:22.758805  110351 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.758865  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.758873  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.758899  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.758996  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.759280  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.759364  110351 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 09:20:22.759533  110351 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.759622  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.759634  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.759670  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.759739  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.759789  110351 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 09:20:22.759992  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.760239  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.760319  110351 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 09:20:22.760454  110351 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.760524  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.760535  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.760561  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.760621  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.760663  110351 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 09:20:22.760852  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.761175  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.761246  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.761296  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.761394  110351 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 09:20:22.761515  110351 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.761601  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.761614  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.761645  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.761690  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.761721  110351 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 09:20:22.761858  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.762120  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.762231  110351 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 09:20:22.762368  110351 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.762428  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.762439  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.762469  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.762500  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.762530  110351 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 09:20:22.762767  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.763032  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.763152  110351 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 09:20:22.763296  110351 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.763369  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.763380  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.763410  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.763459  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.763491  110351 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 09:20:22.763678  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.764024  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.764109  110351 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 09:20:22.764226  110351 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.764290  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.764299  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.764303  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.764327  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.764382  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.764411  110351 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 09:20:22.764545  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.764626  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.764694  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.764943  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.765212  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.765280  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.765297  110351 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 09:20:22.765317  110351 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 09:20:22.765319  110351 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.765404  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.765416  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.765440  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.765508  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.766994  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.767349  110351 watch_cache.go:405] Replace watchCache (rev: 28601) 
I0814 09:20:22.767821  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.767904  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.767913  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.767939  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.767973  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.768021  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.768519  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.768696  110351 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.768766  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.768777  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.768811  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.768858  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.768898  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.769121  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.769220  110351 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 09:20:22.769748  110351 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.769918  110351 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.770650  110351 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.771250  110351 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.771753  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.771839  110351 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 09:20:22.773158  110351 watch_cache.go:405] Replace watchCache (rev: 28602) 
I0814 09:20:22.774158  110351 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.774764  110351 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.775191  110351 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.775292  110351 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.775517  110351 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.775983  110351 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.776758  110351 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.777002  110351 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.777839  110351 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.778205  110351 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.778976  110351 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.779325  110351 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.780268  110351 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.780662  110351 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.780884  110351 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.781043  110351 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.781346  110351 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.781531  110351 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.781860  110351 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.782705  110351 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.783076  110351 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.784010  110351 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.785059  110351 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.786407  110351 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.787703  110351 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.794784  110351 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.795045  110351 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.795900  110351 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.796804  110351 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.797376  110351 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.799021  110351 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.799379  110351 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.799599  110351 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 09:20:22.799685  110351 master.go:434] Enabling API group "authentication.k8s.io".
I0814 09:20:22.799735  110351 master.go:434] Enabling API group "authorization.k8s.io".
I0814 09:20:22.800086  110351 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.800265  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.800367  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.800457  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.800574  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.801546  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.801708  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.802072  110351 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 09:20:22.802172  110351 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 09:20:22.802941  110351 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.803086  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.803151  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.803173  110351 watch_cache.go:405] Replace watchCache (rev: 28602) 
I0814 09:20:22.803211  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.803449  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.804748  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.804847  110351 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 09:20:22.804962  110351 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.805046  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.805055  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.805082  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.805116  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.805119  110351 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 09:20:22.805229  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.806474  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.806565  110351 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 09:20:22.806600  110351 master.go:434] Enabling API group "autoscaling".
I0814 09:20:22.806724  110351 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.806793  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.806803  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.806832  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.806878  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.806909  110351 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 09:20:22.807109  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.807350  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.807451  110351 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 09:20:22.807566  110351 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.807647  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.807657  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.807685  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.807728  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.807760  110351 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 09:20:22.807892  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.810436  110351 watch_cache.go:405] Replace watchCache (rev: 28602) 
I0814 09:20:22.810479  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.810512  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.810648  110351 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 09:20:22.810683  110351 master.go:434] Enabling API group "batch".
I0814 09:20:22.810830  110351 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.810863  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.810910  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.810930  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.810958  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.811005  110351 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 09:20:22.811016  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.811307  110351 watch_cache.go:405] Replace watchCache (rev: 28602) 
I0814 09:20:22.812100  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.812174  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.812312  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.812338  110351 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 09:20:22.812359  110351 master.go:434] Enabling API group "certificates.k8s.io".
I0814 09:20:22.812765  110351 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.812832  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.812844  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.812874  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.812401  110351 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 09:20:22.813307  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.813792  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.813803  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.813894  110351 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 09:20:22.813951  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.814022  110351 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 09:20:22.814034  110351 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.814097  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.814108  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.814141  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.814198  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.814532  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.814651  110351 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 09:20:22.814673  110351 master.go:434] Enabling API group "coordination.k8s.io".
I0814 09:20:22.814879  110351 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.814947  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.814958  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.815029  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.815081  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.815125  110351 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 09:20:22.815242  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.815388  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.815662  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.815763  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.815789  110351 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 09:20:22.815809  110351 master.go:434] Enabling API group "extensions".
I0814 09:20:22.815867  110351 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 09:20:22.815984  110351 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.816064  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.816074  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.816101  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.816197  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.816424  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.816459  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.816537  110351 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 09:20:22.816567  110351 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 09:20:22.816680  110351 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.816742  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.816756  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.816788  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.818003  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.818027  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.818321  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.819171  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.819431  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.819561  110351 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 09:20:22.819601  110351 master.go:434] Enabling API group "networking.k8s.io".
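Each "parsed scheme" / "scheme not registered" / "clientv3/balancer: pin 127.0.0.1:2379" cluster above marks a fresh etcd v3 client being dialed for one resource's storage, which is why the same gRPC lines recur before every "Monitoring ... count" line. A minimal sketch of that dial, assuming the go.etcd.io/etcd/clientv3 package and using only the endpoint that appears in this log (the timeout is an illustrative assumption):

	package main

	import (
		"log"
		"time"

		"go.etcd.io/etcd/clientv3"
	)

	func main() {
		// endpoint taken from the log; DialTimeout is an assumed value, not from the test
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"http://127.0.0.1:2379"},
			DialTimeout: 5 * time.Second,
		})
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close() // the apiserver keeps its per-store clients open for the server's lifetime
		log.Println("connected:", cli.Endpoints())
	}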
I0814 09:20:22.819637  110351 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.819748  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.819767  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.819819  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.819868  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.819899  110351 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 09:20:22.820097  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.820361  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.820456  110351 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 09:20:22.820470  110351 master.go:434] Enabling API group "node.k8s.io".
I0814 09:20:22.820733  110351 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.820803  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.820814  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.820856  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.820939  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.820968  110351 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 09:20:22.821144  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.821617  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.821712  110351 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 09:20:22.822032  110351 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.822109  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.822121  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.822164  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.822241  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.822270  110351 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 09:20:22.822498  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.822732  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.822825  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.822923  110351 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 09:20:22.822959  110351 master.go:434] Enabling API group "policy".
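The gRPC balancer and resolver messages above are attributed to asm_amd64.s:1337 because the logger tags each message with its caller's file and line, and for these calls the resolved frame appears to land in runtime assembly rather than a gRPC source file; the messages themselves are ordinary connection bookkeeping. A tiny self-contained sketch of how that file:line tag is produced (the logf helper is hypothetical, not the actual klog code):

	package main

	import (
		"fmt"
		"path/filepath"
		"runtime"
	)

	// logf mimics the "file.go:123] message" prefix style seen in this log by
	// asking the runtime which frame called it.
	func logf(format string, args ...interface{}) {
		_, file, line, _ := runtime.Caller(1)
		fmt.Printf("%s:%d] %s\n", filepath.Base(file), line, fmt.Sprintf(format, args...))
	}

	func main() {
		logf("balancerWrapper: got update addr from Notify: %v", "[{127.0.0.1:2379 <nil>}]")
	}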
I0814 09:20:22.822988  110351 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.823045  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.823056  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.823067  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.823103  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.823146  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.823243  110351 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 09:20:22.823346  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.823898  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.824005  110351 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 09:20:22.824143  110351 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.824215  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.824230  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.824287  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.824324  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.824350  110351 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 09:20:22.824500  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.824750  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.824956  110351 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 09:20:22.824991  110351 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.825091  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.825105  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.825155  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.825204  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.825240  110351 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 09:20:22.825570  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.827641  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.827647  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.827975  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.828227  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.828366  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.828446  110351 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 09:20:22.828623  110351 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.828701  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.828711  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.828738  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.828789  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.828816  110351 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 09:20:22.828944  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.829994  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.830131  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.830269  110351 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 09:20:22.830305  110351 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.830340  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.830383  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.830395  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.830429  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.830479  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.830522  110351 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 09:20:22.830715  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.830820  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.830823  110351 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 09:20:22.830841  110351 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 09:20:22.831035  110351 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.831100  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.831109  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.831137  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.831278  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.831499  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.831765  110351 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 09:20:22.831793  110351 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.831860  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.831872  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.831900  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.831935  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.831961  110351 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 09:20:22.832201  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.832457  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.832727  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.832810  110351 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 09:20:22.832989  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.832977  110351 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.833049  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.833058  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.833083  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.833126  110351 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 09:20:22.833209  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.833292  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.833498  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.833539  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.833558  110351 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 09:20:22.833607  110351 master.go:434] Enabling API group "rbac.authorization.k8s.io".
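Each "Listing and watching *X from storage/cacher.go" line paired with a later "Replace watchCache (rev: 28603)" shows the per-resource cacher doing its initial LIST against etcd and swapping the whole watch cache in at that resource version before serving watches. A hypothetical, self-contained sketch of that list-then-replace step (names invented for illustration; not the actual cacher code):

	package main

	import "fmt"

	type watchCache struct {
		rev   int64
		items []string
	}

	// replaceAll mirrors the behaviour logged as "Replace watchCache (rev: N)":
	// drop whatever was cached and install the freshly listed snapshot.
	func (c *watchCache) replaceAll(items []string, rev int64) {
		c.items, c.rev = items, rev
		fmt.Printf("Replace watchCache (rev: %d)\n", rev)
	}

	func main() {
		c := &watchCache{}
		// pretend the initial LIST returned two roles at revision 28603
		c.replaceAll([]string{"roles/admin", "roles/viewer"}, 28603)
		// a real cacher would now start a WATCH from rev 28603 and apply events incrementally
	}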
I0814 09:20:22.834854  110351 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 09:20:22.837730  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.837906  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.838053  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.838614  110351 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.838704  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.838715  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.838773  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.838837  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.839073  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.839160  110351 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 09:20:22.839293  110351 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.839365  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.839377  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.839403  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.839448  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.839492  110351 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 09:20:22.839688  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.840003  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.840080  110351 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 09:20:22.840122  110351 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 09:20:22.840248  110351 master.go:423] Skipping disabled API group "settings.k8s.io".
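The master.go:434 and master.go:423 lines record which API groups this test apiserver installs and which it skips (settings.k8s.io is disabled by default, matching the line just above). One way to verify the served groups from outside is to read the discovery endpoint with client-go; a sketch assuming an illustrative server address, since the integration test wires its client internally:

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/discovery"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg := &rest.Config{Host: "http://127.0.0.1:8080"} // assumed address for illustration only
		dc, err := discovery.NewDiscoveryClientForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		groups, err := dc.ServerGroups()
		if err != nil {
			log.Fatal(err)
		}
		for _, g := range groups.Groups {
			fmt.Println(g.Name) // expect rbac.authorization.k8s.io, policy, node.k8s.io, ... but not settings.k8s.io
		}
	}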
I0814 09:20:22.840424  110351 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.840493  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.840505  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.840541  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.840637  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.840664  110351 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 09:20:22.840705  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.840836  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.841068  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.841147  110351 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 09:20:22.841402  110351 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.841480  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.841489  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.841559  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.841628  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.841679  110351 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 09:20:22.841836  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.842073  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.842100  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.842149  110351 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 09:20:22.842180  110351 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.842235  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.842244  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.842289  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.842319  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.842325  110351 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 09:20:22.842469  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.843514  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.843786  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.843803  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.843930  110351 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 09:20:22.843964  110351 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.844025  110351 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 09:20:22.844036  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.844046  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.844073  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.844255  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.844415  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.845219  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.845295  110351 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 09:20:22.845423  110351 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.845486  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.845494  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.845519  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.845604  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.845640  110351 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 09:20:22.845850  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.846089  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.846162  110351 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 09:20:22.846421  110351 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.846481  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.846490  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.846517  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.846601  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.846634  110351 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 09:20:22.846948  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.847188  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.847261  110351 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 09:20:22.847275  110351 master.go:434] Enabling API group "storage.k8s.io".
I0814 09:20:22.847431  110351 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.847514  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.847523  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.847550  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.847612  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.847638  110351 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 09:20:22.847846  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.847926  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.848360  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.848372  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.848481  110351 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 09:20:22.848645  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.848648  110351 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.848710  110351 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 09:20:22.848714  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.848829  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.848898  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.848954  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.850400  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.850824  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.850932  110351 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 09:20:22.851107  110351 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.851171  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.851180  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.851210  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.851251  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.851278  110351 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 09:20:22.851490  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.851792  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.851898  110351 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 09:20:22.852074  110351 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.852140  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.852149  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.852206  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.852266  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.852294  110351 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 09:20:22.852489  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.852782  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.853044  110351 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 09:20:22.853198  110351 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.853295  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.853305  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.853337  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.853371  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.853434  110351 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 09:20:22.853696  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.853936  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.854006  110351 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 09:20:22.854039  110351 master.go:434] Enabling API group "apps".
I0814 09:20:22.854066  110351 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.854134  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.854144  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.854171  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.854205  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.854236  110351 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 09:20:22.854463  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.854769  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.854843  110351 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 09:20:22.854865  110351 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.854920  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.854929  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.854977  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.855062  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.855089  110351 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 09:20:22.855299  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.858048  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.858076  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.858375  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.858491  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.858600  110351 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 09:20:22.858629  110351 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.858677  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.858693  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.858702  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.858751  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.858797  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.858821  110351 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 09:20:22.859039  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.859362  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.859448  110351 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 09:20:22.859490  110351 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.859564  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.859576  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.859640  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.859655  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.859679  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.859712  110351 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 09:20:22.859931  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.860149  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.860208  110351 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 09:20:22.860224  110351 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 09:20:22.860248  110351 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.860417  110351 client.go:354] parsed scheme: ""
I0814 09:20:22.860430  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:22.860471  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:22.860531  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.860558  110351 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 09:20:22.860787  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.861053  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.861470  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:22.861563  110351 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 09:20:22.861576  110351 master.go:434] Enabling API group "events.k8s.io".
I0814 09:20:22.861829  110351 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.862050  110351 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.862399  110351 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.862527  110351 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.862648  110351 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.862729  110351 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.862931  110351 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.863021  110351 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.863115  110351 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.863204  110351 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.864161  110351 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.864446  110351 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.865666  110351 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.866884  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:22.867020  110351 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 09:20:22.867671  110351 watch_cache.go:405] Replace watchCache (rev: 28604) 
I0814 09:20:22.867917  110351 watch_cache.go:405] Replace watchCache (rev: 28604) 
I0814 09:20:22.867966  110351 watch_cache.go:405] Replace watchCache (rev: 28604) 
I0814 09:20:22.868321  110351 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.870342  110351 watch_cache.go:405] Replace watchCache (rev: 28603) 
I0814 09:20:22.870452  110351 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.870896  110351 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.872029  110351 watch_cache.go:405] Replace watchCache (rev: 28604) 
I0814 09:20:22.872672  110351 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.873140  110351 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.876407  110351 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.876693  110351 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 09:20:22.876796  110351 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0814 09:20:22.877569  110351 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.877741  110351 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.878085  110351 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.879139  110351 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.880243  110351 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.881484  110351 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.881954  110351 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.883116  110351 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.884143  110351 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.884699  110351 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.885506  110351 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 09:20:22.885693  110351 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 09:20:22.886809  110351 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.887498  110351 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.888495  110351 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.889635  110351 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.890325  110351 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.891331  110351 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.892242  110351 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.893127  110351 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.893932  110351 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.894765  110351 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.895648  110351 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 09:20:22.895777  110351 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 09:20:22.896782  110351 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.897555  110351 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 09:20:22.897712  110351 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 09:20:22.898494  110351 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.899302  110351 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.899699  110351 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.900351  110351 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.901210  110351 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.901872  110351 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.902668  110351 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 09:20:22.902832  110351 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 09:20:22.903810  110351 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.904576  110351 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.905237  110351 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.906238  110351 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.906501  110351 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.906777  110351 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.907770  110351 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.908127  110351 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.908409  110351 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.909364  110351 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.909632  110351 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.910062  110351 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 09:20:22.910128  110351 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 09:20:22.910136  110351 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0814 09:20:22.911155  110351 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.911829  110351 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.912682  110351 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.913299  110351 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 09:20:22.914222  110351 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"52dcb649-b0b3-4378-aaf3-f181e25cafad", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
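The storage_factory.go entries above register each API resource against the etcd backend; the dumped storagebackend.Config prints its time.Duration fields as raw nanosecond counts, so CompactionInterval:300000000000 is 5 minutes and CountMetricPollPeriod:60000000000 is 1 minute. A minimal standalone Go sketch (not part of the test, written only to confirm that conversion):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the storagebackend.Config dumps in the log above,
	// which print time.Duration fields as raw nanoseconds.
	compactionInterval := time.Duration(300000000000)   // CompactionInterval
	countMetricPollPeriod := time.Duration(60000000000) // CountMetricPollPeriod

	fmt.Println(compactionInterval)    // 5m0s
	fmt.Println(countMetricPollPeriod) // 1m0s
}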
I0814 09:20:22.917389  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:22.917419  110351 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 09:20:22.917429  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:22.917439  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:22.917448  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:22.917455  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:22.917510  110351 httplog.go:90] GET /healthz: (221.299µs) 0 [Go-http-client/1.1 127.0.0.1:47344]
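The block above is the verbose /healthz report logged while the apiserver is still starting: etcd and several post-start hooks are marked failed, so the overall check fails and the request is logged without a success status. A minimal sketch of fetching the same verbose report, assuming a reachable test apiserver at an illustrative address (the real test talks to an in-process server, so the URL below is an assumption):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical address of a locally running test apiserver; chosen only
	// for illustration, not taken from the log.
	resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// Prints the same per-check [+]/[-] lines seen in the log above.
	fmt.Println(resp.StatusCode)
	fmt.Println(string(body))
}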
I0814 09:20:22.918913  110351 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.302478ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:22.921680  110351 httplog.go:90] GET /api/v1/services: (1.318561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:22.925656  110351 httplog.go:90] GET /api/v1/services: (1.048564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:22.930517  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:22.930549  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:22.930562  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:22.930572  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:22.930603  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:22.930644  110351 httplog.go:90] GET /healthz: (210.785µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:22.932233  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.356837ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:22.933109  110351 httplog.go:90] GET /api/v1/services: (912.871µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:22.933954  110351 httplog.go:90] POST /api/v1/namespaces: (1.399908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:22.935124  110351 httplog.go:90] GET /api/v1/namespaces/kube-public: (814.304µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:22.937062  110351 httplog.go:90] POST /api/v1/namespaces: (1.535006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:22.938287  110351 httplog.go:90] GET /api/v1/services: (1.358546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:22.938441  110351 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.04597ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:22.940690  110351 httplog.go:90] POST /api/v1/namespaces: (1.684784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:23.020469  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.020660  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.020809  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.020908  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.020989  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.021252  110351 httplog.go:90] GET /healthz: (914.905µs) 0 [Go-http-client/1.1 127.0.0.1:47346]
I0814 09:20:23.031377  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.031576  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.031747  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.031849  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.031930  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.032174  110351 httplog.go:90] GET /healthz: (983.261µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:23.118298  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.118336  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.118348  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.118357  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.118373  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.118415  110351 httplog.go:90] GET /healthz: (256.589µs) 0 [Go-http-client/1.1 127.0.0.1:47346]
I0814 09:20:23.131312  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.131351  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.131364  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.131375  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.131383  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.131410  110351 httplog.go:90] GET /healthz: (226.256µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:23.218247  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.218286  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.218313  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.218324  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.218332  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.218360  110351 httplog.go:90] GET /healthz: (237.779µs) 0 [Go-http-client/1.1 127.0.0.1:47346]
I0814 09:20:23.231313  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.231349  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.231363  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.231374  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.231383  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.231425  110351 httplog.go:90] GET /healthz: (245.361µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:23.318228  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.318261  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.318273  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.318284  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.318292  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.318320  110351 httplog.go:90] GET /healthz: (222.881µs) 0 [Go-http-client/1.1 127.0.0.1:47346]
I0814 09:20:23.331357  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.331395  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.331412  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.331422  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.331429  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.331466  110351 httplog.go:90] GET /healthz: (263.778µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:23.419206  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.419239  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.419252  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.419262  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.419273  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.419302  110351 httplog.go:90] GET /healthz: (252.969µs) 0 [Go-http-client/1.1 127.0.0.1:47346]
I0814 09:20:23.431311  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.431340  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.431351  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.431361  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.431369  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.431404  110351 httplog.go:90] GET /healthz: (228.744µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:23.518294  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.518329  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.518342  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.518351  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.518359  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.518397  110351 httplog.go:90] GET /healthz: (232.882µs) 0 [Go-http-client/1.1 127.0.0.1:47346]
I0814 09:20:23.531523  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.531559  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.531570  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.531597  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.531619  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.531662  110351 httplog.go:90] GET /healthz: (450.536µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:23.618250  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.618288  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.618300  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.618310  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.618318  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.618384  110351 httplog.go:90] GET /healthz: (287.88µs) 0 [Go-http-client/1.1 127.0.0.1:47346]
I0814 09:20:23.631304  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.631344  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.631356  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.631365  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.631373  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.631404  110351 httplog.go:90] GET /healthz: (260.184µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:23.718239  110351 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 09:20:23.718279  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.718292  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.718301  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.718309  110351 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.718343  110351 httplog.go:90] GET /healthz: (245.61µs) 0 [Go-http-client/1.1 127.0.0.1:47346]
I0814 09:20:23.726814  110351 client.go:354] parsed scheme: ""
I0814 09:20:23.726845  110351 client.go:354] scheme "" not registered, fallback to default scheme
I0814 09:20:23.726906  110351 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 09:20:23.726986  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:23.727696  110351 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 09:20:23.727774  110351 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 09:20:23.733082  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.733106  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.733116  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.733124  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.733172  110351 httplog.go:90] GET /healthz: (1.264392ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:23.819239  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.819273  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.819285  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.819309  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.819355  110351 httplog.go:90] GET /healthz: (1.184986ms) 0 [Go-http-client/1.1 127.0.0.1:47346]
I0814 09:20:23.832540  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.832599  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.832615  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.832626  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.832670  110351 httplog.go:90] GET /healthz: (1.513627ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:23.919335  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.919363  110351 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 09:20:23.919373  110351 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 09:20:23.919381  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 09:20:23.919421  110351 httplog.go:90] GET /healthz: (1.076466ms) 0 [Go-http-client/1.1 127.0.0.1:47592]
I0814 09:20:23.919449  110351 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.398038ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:23.919686  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.136945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47346]
I0814 09:20:23.921074  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (940.904µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:23.922069  110351 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.084906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47592]
I0814 09:20:23.922167  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.282069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.922473  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.03157ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:23.922990  110351 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 09:20:23.924339  110351 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.119632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47592]
I0814 09:20:23.924492  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.544115ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:23.924557  110351 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.869069ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.925675  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (782.348µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47592]
I0814 09:20:23.926436  110351 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.44495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:23.926613  110351 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 09:20:23.926631  110351 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
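The two POST /apis/scheduling.k8s.io/v1beta1/priorityclasses calls above create the built-in classes system-node-critical (value 2000001000) and system-cluster-critical (value 2000000000), after which the scheduling bootstrap hook reports completion. A minimal client-go sketch that reads them back, assuming a cluster reachable through the default kubeconfig and a recent client-go (exact method signatures vary by version):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at a running cluster; the integration
	// test itself uses an in-process apiserver instead.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	for _, name := range []string{"system-node-critical", "system-cluster-critical"} {
		pc, err := client.SchedulingV1().PriorityClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Expected values per the log: 2000001000 and 2000000000.
		fmt.Println(pc.Name, pc.Value)
	}
}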
I0814 09:20:23.927175  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (941.851µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47592]
I0814 09:20:23.927875  110351 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.776759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.928430  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (960.012µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47592]
I0814 09:20:23.929651  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (763.708µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.930959  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (880.564µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.932064  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:23.932097  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:23.932125  110351 httplog.go:90] GET /healthz: (978.586µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:23.932227  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (883.516µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.933846  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.255942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.934134  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0814 09:20:23.935282  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (948.09µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.937278  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.554548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.937488  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 09:20:23.938417  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (696.993µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.940306  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.557235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.940531  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 09:20:23.941689  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (875.515µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.943668  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.507138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.943927  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 09:20:23.945045  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (886.589µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.946676  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.268601ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.946933  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 09:20:23.948073  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (752.269µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.949870  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.446176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.950142  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 09:20:23.951215  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (745.811µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.952862  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.266515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.953150  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 09:20:23.954149  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (820.846µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.955799  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.251484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.955995  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 09:20:23.957144  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (928.295µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.959296  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.585399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.959667  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 09:20:23.960801  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (935.788µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.962722  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.435641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.963144  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 09:20:23.964145  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (758.247µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.966276  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.50017ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.966802  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 09:20:23.967938  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (861.863µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.970219  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.646835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.970684  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 09:20:23.972041  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.101576ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.974648  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.914529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.974997  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 09:20:23.975866  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (674.047µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.977385  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.263747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.977772  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 09:20:23.978864  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (757.842µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.981150  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.870194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.981329  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 09:20:23.982294  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (791.212µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.983978  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.339449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.984455  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 09:20:23.985458  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (638.659µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.987376  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.597834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.987575  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 09:20:23.988534  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (686.527µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.990524  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.451118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.990750  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 09:20:23.991700  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (777.232µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.994386  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.571406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.994701  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 09:20:23.995688  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (760.595µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.997106  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.132696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.997239  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 09:20:23.998172  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (756.209µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:23.999849  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.337864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.000214  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 09:20:24.001275  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (812.915µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.002963  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.218243ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.003263  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 09:20:24.004502  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.036812ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.006311  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.28885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.006484  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 09:20:24.007497  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (749.446µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.009334  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.360101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.009601  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 09:20:24.010576  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (748.517µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.018808  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.018916  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.019108  110351 httplog.go:90] GET /healthz: (1.131279ms) 0 [Go-http-client/1.1 127.0.0.1:47590]
I0814 09:20:24.032223  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.032252  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.032298  110351 httplog.go:90] GET /healthz: (1.128672ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.046266  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.52577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.046579  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0814 09:20:24.051438  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (4.081445ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.054137  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.281309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.054685  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 09:20:24.055832  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (785.722µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.058258  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.964803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.058576  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 09:20:24.059768  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (866.424µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.061861  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.628999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.062357  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 09:20:24.063512  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (943.658µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.066409  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.69297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.066576  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 09:20:24.067829  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.085681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.070024  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.777613ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.070455  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 09:20:24.071382  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (749.778µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.074292  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.294522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.074509  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 09:20:24.075988  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.30647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.078819  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.492609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.079235  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 09:20:24.080657  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (948.963µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.083764  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.460726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.084398  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 09:20:24.085647  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (895.255µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.088388  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.039435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.088767  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 09:20:24.090135  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.03584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.092185  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.733673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.092377  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 09:20:24.094267  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.485459ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.096207  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.343439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.096615  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 09:20:24.097868  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.05615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.100770  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.425053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.101089  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 09:20:24.102404  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.042026ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.104990  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.746469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.105252  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 09:20:24.106961  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.437719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.109815  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.35299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.109984  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 09:20:24.110993  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (844.89µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.112531  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.16964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.112900  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 09:20:24.114029  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (831.851µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.115859  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.226957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.116061  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 09:20:24.117247  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (878.126µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.118604  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.118631  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.118676  110351 httplog.go:90] GET /healthz: (725.152µs) 0 [Go-http-client/1.1 127.0.0.1:47344]
I0814 09:20:24.119705  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.053886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.119866  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 09:20:24.121600  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.09933ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.123486  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.424249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.123711  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 09:20:24.125080  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.11069ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.126923  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.50609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.127425  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 09:20:24.128326  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (706.388µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.130175  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.454761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.130376  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 09:20:24.131605  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.003984ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.133370  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.133393  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.133420  110351 httplog.go:90] GET /healthz: (1.369219ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.133477  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.453061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.133692  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 09:20:24.134758  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (752.985µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.136216  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.152791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.136382  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 09:20:24.138819  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (2.286948ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.141478  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.19777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.141976  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 09:20:24.143276  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.075067ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.145545  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.562811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.146618  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 09:20:24.147559  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (729.444µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.149935  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.854424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.150167  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 09:20:24.151112  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (752.232µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.152940  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.507001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.153180  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 09:20:24.158232  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (852.823µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.179703  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.222768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.180203  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 09:20:24.198790  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.232422ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.219663  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.230958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.220018  110351 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 09:20:24.220676  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.220699  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.220730  110351 httplog.go:90] GET /healthz: (952.964µs) 0 [Go-http-client/1.1 127.0.0.1:47344]
I0814 09:20:24.232401  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.232449  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.232709  110351 httplog.go:90] GET /healthz: (1.446662ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.238848  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.452783ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.260005  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.57706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.260283  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 09:20:24.278827  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.368947ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.300698  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.064773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.301363  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 09:20:24.319303  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.319348  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.319387  110351 httplog.go:90] GET /healthz: (1.365784ms) 0 [Go-http-client/1.1 127.0.0.1:47590]
I0814 09:20:24.319442  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.936091ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.332311  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.332351  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.332409  110351 httplog.go:90] GET /healthz: (1.262236ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.339755  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.32889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.340204  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 09:20:24.359026  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.510873ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.379442  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.957289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.379757  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 09:20:24.398669  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.25432ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.418978  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.419021  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.419069  110351 httplog.go:90] GET /healthz: (924.805µs) 0 [Go-http-client/1.1 127.0.0.1:47590]
I0814 09:20:24.420288  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.869181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.420765  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 09:20:24.432295  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.432411  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.432642  110351 httplog.go:90] GET /healthz: (1.464327ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.438660  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.146157ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.460113  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.658832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.460363  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 09:20:24.478833  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.391791ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.499515  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.002313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.499927  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 09:20:24.519257  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.80451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.519657  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.519749  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.519883  110351 httplog.go:90] GET /healthz: (1.896764ms) 0 [Go-http-client/1.1 127.0.0.1:47590]
I0814 09:20:24.533546  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.533819  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.534001  110351 httplog.go:90] GET /healthz: (1.697264ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.540012  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.579653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.540348  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 09:20:24.560141  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.400044ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.579450  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.022522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.579725  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0814 09:20:24.598855  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.454056ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.619378  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.619560  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.619627  110351 httplog.go:90] GET /healthz: (1.612373ms) 0 [Go-http-client/1.1 127.0.0.1:47344]
I0814 09:20:24.620149  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.62387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.620498  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 09:20:24.632712  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.632745  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.632784  110351 httplog.go:90] GET /healthz: (1.514092ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.638671  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.25606ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.659875  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.441167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.660118  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 09:20:24.678821  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.380982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.700155  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.641071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.700394  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 09:20:24.719350  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.719384  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.719421  110351 httplog.go:90] GET /healthz: (1.351909ms) 0 [Go-http-client/1.1 127.0.0.1:47344]
I0814 09:20:24.720206  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (2.757652ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.732466  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.732501  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.732534  110351 httplog.go:90] GET /healthz: (1.363377ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.739187  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.802208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.739388  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 09:20:24.758897  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.412491ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.780194  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.503322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.780475  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 09:20:24.799089  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.635322ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.819846  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.819888  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.819924  110351 httplog.go:90] GET /healthz: (1.63732ms) 0 [Go-http-client/1.1 127.0.0.1:47344]
I0814 09:20:24.820279  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.83526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.820463  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 09:20:24.832001  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.832034  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.832093  110351 httplog.go:90] GET /healthz: (948.48µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.838835  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.322683ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.859774  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.311957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.860246  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 09:20:24.878720  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.164225ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.899917  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.254501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.900144  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 09:20:24.920059  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.391115ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:24.921962  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.921990  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.922037  110351 httplog.go:90] GET /healthz: (3.739033ms) 0 [Go-http-client/1.1 127.0.0.1:47344]
I0814 09:20:24.932250  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:24.932278  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:24.932350  110351 httplog.go:90] GET /healthz: (1.187976ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.939392  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.970821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.939753  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 09:20:24.958927  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.329205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.979749  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.21683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:24.980170  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 09:20:24.998907  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.417702ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.019652  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.019683  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.019748  110351 httplog.go:90] GET /healthz: (1.705694ms) 0 [Go-http-client/1.1 127.0.0.1:47590]
I0814 09:20:25.021431  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.953494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.021669  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 09:20:25.032360  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.032388  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.032425  110351 httplog.go:90] GET /healthz: (1.225943ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.039386  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.305803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.059900  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.401653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.060154  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 09:20:25.078930  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.416511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.102054  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.323012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.102304  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 09:20:25.119045  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.119365  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.119866  110351 httplog.go:90] GET /healthz: (1.875202ms) 0 [Go-http-client/1.1 127.0.0.1:47590]
I0814 09:20:25.120134  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (2.671346ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.132904  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.132938  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.133138  110351 httplog.go:90] GET /healthz: (1.817091ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.139745  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.284061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.140306  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 09:20:25.158928  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.377971ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.179440  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.915093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.179756  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 09:20:25.198734  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.278503ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.219661  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.219693  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.219757  110351 httplog.go:90] GET /healthz: (1.33321ms) 0 [Go-http-client/1.1 127.0.0.1:47590]
I0814 09:20:25.220658  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.148972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.220831  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 09:20:25.232060  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.232094  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.232138  110351 httplog.go:90] GET /healthz: (1.0235ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.238370  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (987.691µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.259991  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.864943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.260223  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 09:20:25.279854  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (2.375675ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.299748  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.295372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.299969  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 09:20:25.319081  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.608687ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.319955  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.320221  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.320539  110351 httplog.go:90] GET /healthz: (2.39137ms) 0 [Go-http-client/1.1 127.0.0.1:47590]
I0814 09:20:25.333442  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.333747  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.334476  110351 httplog.go:90] GET /healthz: (3.200434ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.340549  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.265273ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.340802  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 09:20:25.362019  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (3.018478ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.379562  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.083624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.379883  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 09:20:25.398700  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.114871ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.422126  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.422165  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.734116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.422186  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.422221  110351 httplog.go:90] GET /healthz: (3.874224ms) 0 [Go-http-client/1.1 127.0.0.1:47344]
I0814 09:20:25.422487  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 09:20:25.432203  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.432240  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.432289  110351 httplog.go:90] GET /healthz: (1.190872ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.438271  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (938.352µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.459553  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.084916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.459815  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 09:20:25.478840  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.313773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.499691  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.261229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.501113  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 09:20:25.519216  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.771997ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.519918  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.520055  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.520203  110351 httplog.go:90] GET /healthz: (1.992772ms) 0 [Go-http-client/1.1 127.0.0.1:47344]
I0814 09:20:25.532158  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.532203  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.532257  110351 httplog.go:90] GET /healthz: (1.103535ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.539621  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.147813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.540003  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 09:20:25.558649  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.199316ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.579332  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.893381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.579575  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 09:20:25.599034  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.43808ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.619398  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.619660  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.619913  110351 httplog.go:90] GET /healthz: (1.89451ms) 0 [Go-http-client/1.1 127.0.0.1:47590]
I0814 09:20:25.620137  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.596433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.620376  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 09:20:25.632890  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.633132  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.633301  110351 httplog.go:90] GET /healthz: (1.562353ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.638630  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.134265ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.659831  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.368065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.660076  110351 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 09:20:25.678785  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.392605ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.680381  110351 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.183825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.699869  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.375297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.700347  110351 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 09:20:25.718852  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.718904  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.718940  110351 httplog.go:90] GET /healthz: (913.933µs) 0 [Go-http-client/1.1 127.0.0.1:47344]
I0814 09:20:25.719003  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.571719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.720724  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.318706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.732193  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.732365  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.732502  110351 httplog.go:90] GET /healthz: (1.281281ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.739495  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.0296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.739963  110351 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 09:20:25.758805  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.313042ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.761000  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.600307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.779893  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.502678ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.780107  110351 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 09:20:25.798910  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.469469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.800516  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.226166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.820019  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.820061  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.820132  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.641945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.820176  110351 httplog.go:90] GET /healthz: (1.651804ms) 0 [Go-http-client/1.1 127.0.0.1:47344]
I0814 09:20:25.820784  110351 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 09:20:25.832211  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.832242  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.832277  110351 httplog.go:90] GET /healthz: (1.157373ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.838449  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.092507ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.840326  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.090346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.859695  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.268212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.860472  110351 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 09:20:25.878602  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.201131ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.880532  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.150855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.899526  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.088521ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.899985  110351 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 09:20:25.919056  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.919114  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.919176  110351 httplog.go:90] GET /healthz: (933.995µs) 0 [Go-http-client/1.1 127.0.0.1:47344]
I0814 09:20:25.919215  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.729184ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:25.921053  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.376133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.932201  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:25.932237  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:25.932279  110351 httplog.go:90] GET /healthz: (1.134043ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.940149  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.740335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.940678  110351 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 09:20:25.959240  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.780792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.962233  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.275684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.979526  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.006267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:25.979784  110351 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 09:20:25.998907  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.432404ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.000555  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.287333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.019274  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:26.019324  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:26.019365  110351 httplog.go:90] GET /healthz: (1.292384ms) 0 [Go-http-client/1.1 127.0.0.1:47590]
I0814 09:20:26.019566  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.121327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.020113  110351 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 09:20:26.032364  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:26.032559  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:26.032856  110351 httplog.go:90] GET /healthz: (1.702929ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.039099  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.710966ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.041059  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.402145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.059846  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.412431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.060223  110351 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 09:20:26.079223  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.731133ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.081116  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.246285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.100451  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.965635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.101109  110351 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 09:20:26.118931  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:26.118968  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:26.119035  110351 httplog.go:90] GET /healthz: (1.001651ms) 0 [Go-http-client/1.1 127.0.0.1:47590]
I0814 09:20:26.119049  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.58281ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.121857  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.314887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.132077  110351 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 09:20:26.132118  110351 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 09:20:26.132160  110351 httplog.go:90] GET /healthz: (1.041062ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.139197  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.812924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.139755  110351 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 09:20:26.158899  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.35169ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.160864  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.538652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.180028  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.504747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.180397  110351 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 09:20:26.200401  110351 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (2.452037ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.203839  110351 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.897246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.219244  110351 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.713253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.219502  110351 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 09:20:26.219657  110351 httplog.go:90] GET /healthz: (1.154342ms) 200 [Go-http-client/1.1 127.0.0.1:47590]
W0814 09:20:26.220310  110351 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 09:20:26.220343  110351 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 09:20:26.220359  110351 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 09:20:26.220368  110351 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 09:20:26.220381  110351 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 09:20:26.220389  110351 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 09:20:26.220409  110351 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 09:20:26.220421  110351 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 09:20:26.220439  110351 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 09:20:26.220492  110351 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 09:20:26.220502  110351 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 09:20:26.220521  110351 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0814 09:20:26.220530  110351 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0814 09:20:26.220994  110351 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.221017  110351 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.221080  110351 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.221099  110351 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.221328  110351 reflector.go:122] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.221341  110351 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.221514  110351 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.221530  110351 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.221747  110351 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.221760  110351 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.221891  110351 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.221903  110351 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.222753  110351 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.222835  110351 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.223130  110351 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.223150  110351 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.222265  110351 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.223386  110351 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.223555  110351 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.223595  110351 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.223670  110351 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.223713  110351 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0814 09:20:26.224877  110351 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (584.316µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:20:26.225027  110351 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (699.974µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:26.225242  110351 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (411.081µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0814 09:20:26.225566  110351 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (429.854µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47652]
I0814 09:20:26.225848  110351 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (501.793µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47638]
I0814 09:20:26.225962  110351 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (444.384µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:26.226305  110351 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=28601 labels= fields= timeout=7m39s
I0814 09:20:26.226921  110351 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=28603 labels= fields= timeout=5m25s
I0814 09:20:26.227181  110351 get.go:250] Starting watch for /api/v1/services, rv=28601 labels= fields= timeout=7m49s
I0814 09:20:26.227503  110351 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=28603 labels= fields= timeout=7m4s
I0814 09:20:26.227729  110351 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (417.231µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47644]
I0814 09:20:26.227883  110351 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (424.187µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0814 09:20:26.228179  110351 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (447.654µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47638]
I0814 09:20:26.228524  110351 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=28603 labels= fields= timeout=5m24s
I0814 09:20:26.228526  110351 get.go:250] Starting watch for /api/v1/nodes, rv=28601 labels= fields= timeout=9m54s
I0814 09:20:26.228643  110351 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (379.569µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0814 09:20:26.229351  110351 get.go:250] Starting watch for /api/v1/pods, rv=28601 labels= fields= timeout=7m21s
I0814 09:20:26.229308  110351 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=28603 labels= fields= timeout=6m36s
I0814 09:20:26.230183  110351 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=28603 labels= fields= timeout=6m36s
I0814 09:20:26.230251  110351 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=28602 labels= fields= timeout=7m14s
I0814 09:20:26.230698  110351 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (4.764172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47640]
I0814 09:20:26.232284  110351 httplog.go:90] GET /healthz: (936.968µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:26.232513  110351 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=28601 labels= fields= timeout=8m49s
I0814 09:20:26.233808  110351 httplog.go:90] GET /api/v1/namespaces/default: (1.125846ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:26.235558  110351 httplog.go:90] POST /api/v1/namespaces: (1.32458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:26.237179  110351 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.020732ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:26.240869  110351 httplog.go:90] POST /api/v1/namespaces/default/services: (3.283233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:26.242430  110351 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.089686ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:26.244176  110351 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.277295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:26.320961  110351 shared_informer.go:211] caches populated
I0814 09:20:26.421156  110351 shared_informer.go:211] caches populated
I0814 09:20:26.521397  110351 shared_informer.go:211] caches populated
I0814 09:20:26.621600  110351 shared_informer.go:211] caches populated
I0814 09:20:26.721835  110351 shared_informer.go:211] caches populated
I0814 09:20:26.822033  110351 shared_informer.go:211] caches populated
I0814 09:20:26.922244  110351 shared_informer.go:211] caches populated
I0814 09:20:27.022436  110351 shared_informer.go:211] caches populated
I0814 09:20:27.122657  110351 shared_informer.go:211] caches populated
I0814 09:20:27.222940  110351 shared_informer.go:211] caches populated
I0814 09:20:27.225577  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:27.226595  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:27.226998  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:27.228328  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:27.228959  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:27.229668  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:27.231142  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:27.323116  110351 shared_informer.go:211] caches populated
I0814 09:20:27.423261  110351 shared_informer.go:211] caches populated
I0814 09:20:27.427014  110351 httplog.go:90] POST /api/v1/nodes: (3.214383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:27.427375  110351 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0814 09:20:27.430281  110351 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods: (2.498413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:27.430661  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/waiting-pod
I0814 09:20:27.430676  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/waiting-pod
I0814 09:20:27.430812  110351 scheduler_binder.go:256] AssumePodVolumes for pod "preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/waiting-pod", node "test-node-0"
I0814 09:20:27.430829  110351 scheduler_binder.go:266] AssumePodVolumes for pod "preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/waiting-pod", node "test-node-0": all PVCs bound and nothing to do
I0814 09:20:27.430891  110351 framework.go:558] waiting for 30s for pod "waiting-pod" at permit
I0814 09:20:27.434312  110351 factory.go:615] Attempting to bind signalling-pod to test-node-1
I0814 09:20:27.434682  110351 factory.go:615] Attempting to bind waiting-pod to test-node-0
I0814 09:20:27.435739  110351 scheduler.go:447] Failed to bind pod: permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod
E0814 09:20:27.435754  110351 scheduler.go:449] scheduler cache ForgetPod failed: pod 9550561e-fdfd-4cc9-bd3f-9e919b1f2940 wasn't assumed so cannot be forgotten
E0814 09:20:27.435772  110351 scheduler.go:605] error binding pod: Post http://127.0.0.1:35833/api/v1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/pods/signalling-pod/binding: dial tcp 127.0.0.1:35833: connect: connection refused
E0814 09:20:27.435796  110351 factory.go:566] Error scheduling permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod: Post http://127.0.0.1:35833/api/v1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/pods/signalling-pod/binding: dial tcp 127.0.0.1:35833: connect: connection refused; retrying
I0814 09:20:27.435821  110351 factory.go:624] Updating pod condition for permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0814 09:20:27.436467  110351 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35833/apis/events.k8s.io/v1beta1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/events: dial tcp 127.0.0.1:35833: connect: connection refused' (may retry after sleeping)
E0814 09:20:27.436552  110351 scheduler.go:280] Error updating the condition of the pod permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod: Put http://127.0.0.1:35833/api/v1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/pods/signalling-pod/status: dial tcp 127.0.0.1:35833: connect: connection refused
E0814 09:20:27.436623  110351 factory.go:599] Error getting pod permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod for retry: Get http://127.0.0.1:35833/api/v1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/pods/signalling-pod: dial tcp 127.0.0.1:35833: connect: connection refused; retrying...
I0814 09:20:27.436996  110351 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/waiting-pod/binding: (2.107556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:27.437157  110351 scheduler.go:614] pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/waiting-pod is bound successfully on node "test-node-0", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
I0814 09:20:27.439445  110351 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/events: (1.969733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
E0814 09:20:27.637242  110351 factory.go:599] Error getting pod permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod for retry: Get http://127.0.0.1:35833/api/v1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/pods/signalling-pod: dial tcp 127.0.0.1:35833: connect: connection refused; retrying...
E0814 09:20:28.037851  110351 factory.go:599] Error getting pod permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod for retry: Get http://127.0.0.1:35833/api/v1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/pods/signalling-pod: dial tcp 127.0.0.1:35833: connect: connection refused; retrying...
I0814 09:20:28.225797  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:28.226743  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:28.227149  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:28.228456  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:28.229095  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:28.229767  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:28.231281  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 09:20:28.838373  110351 factory.go:599] Error getting pod permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod for retry: Get http://127.0.0.1:35833/api/v1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/pods/signalling-pod: dial tcp 127.0.0.1:35833: connect: connection refused; retrying...
I0814 09:20:29.226670  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:29.228058  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:29.228074  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:29.228667  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:29.230535  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:29.230571  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:29.231837  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:30.227333  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:30.228257  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:30.228303  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:30.228776  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:30.230672  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:30.230820  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:30.231959  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 09:20:30.439038  110351 factory.go:599] Error getting pod permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod for retry: Get http://127.0.0.1:35833/api/v1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/pods/signalling-pod: dial tcp 127.0.0.1:35833: connect: connection refused; retrying...
I0814 09:20:31.227525  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:31.228369  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:31.228450  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:31.228933  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:31.230878  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:31.230979  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:31.232089  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:32.227658  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:32.228652  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:32.228665  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:32.229064  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:32.231054  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:32.231120  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:32.232219  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:33.227827  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:33.228806  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:33.228887  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:33.229186  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:33.231214  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:33.231238  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:33.232412  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 09:20:33.639746  110351 factory.go:599] Error getting pod permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod for retry: Get http://127.0.0.1:35833/api/v1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/pods/signalling-pod: dial tcp 127.0.0.1:35833: connect: connection refused; retrying...
I0814 09:20:34.227989  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:34.228987  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:34.229045  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:34.229364  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:34.231402  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:34.231409  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:34.232595  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:35.228208  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:35.229317  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:35.229506  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:35.229533  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:35.231496  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:35.231550  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:35.232735  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:36.228304  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:36.229449  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:36.229633  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:36.229732  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:36.231668  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:36.231670  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:36.233461  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:36.234289  110351 httplog.go:90] GET /api/v1/namespaces/default: (1.349877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:36.235559  110351 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (911.719µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:36.236837  110351 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (951.65µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:37.228492  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:37.229575  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:37.229796  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:37.229966  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:37.232276  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:37.232315  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:37.233731  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:38.228829  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:38.229989  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:38.230088  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:38.230150  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:38.232787  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:38.232816  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:38.233854  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 09:20:38.513953  110351 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35833/apis/events.k8s.io/v1beta1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/events: dial tcp 127.0.0.1:35833: connect: connection refused' (may retry after sleeping)
I0814 09:20:39.228997  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:39.230661  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:39.230899  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:39.230995  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:39.232959  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:39.233025  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:39.233989  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 09:20:40.040313  110351 factory.go:599] Error getting pod permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod for retry: Get http://127.0.0.1:35833/api/v1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/pods/signalling-pod: dial tcp 127.0.0.1:35833: connect: connection refused; retrying...
I0814 09:20:40.229182  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:40.231135  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:40.231222  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:40.231295  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:40.233107  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:40.233129  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:40.235293  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:41.229377  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:41.231922  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:41.231922  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:41.231940  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:41.233239  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:41.233342  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:41.235509  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:42.229565  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:42.232765  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:42.232803  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:42.232876  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:42.233412  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:42.233512  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:42.235664  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:43.229745  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:43.233619  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:43.233942  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:43.234026  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:43.234757  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:43.234819  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:43.235799  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:44.229930  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:44.233781  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:44.234912  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:44.234934  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:44.235497  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:44.235598  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:44.236050  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:45.230171  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:45.233979  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:45.235071  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:45.235102  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:45.235778  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:45.235811  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:45.236206  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:46.230351  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:46.234101  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:46.234794  110351 httplog.go:90] GET /api/v1/namespaces/default: (1.630259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:46.235177  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:46.235202  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:46.235906  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:46.235925  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:46.236327  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:46.237285  110351 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.896194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:46.238984  110351 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.102798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:47.230559  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:47.234299  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:47.235334  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:47.235401  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:47.236024  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:47.236076  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:47.236463  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:48.230783  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:48.235149  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:48.235555  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:48.236172  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:48.236273  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:48.236611  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:48.237900  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:49.230959  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:49.235355  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:49.235702  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:49.236322  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:49.236431  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:49.236824  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:49.238045  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 09:20:49.783342  110351 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35833/apis/events.k8s.io/v1beta1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/events: dial tcp 127.0.0.1:35833: connect: connection refused' (may retry after sleeping)
I0814 09:20:50.231146  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:50.235562  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:50.235821  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:50.236492  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:50.236559  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:50.236991  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:50.238147  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:51.231359  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:51.235811  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:51.235966  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:51.236657  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:51.236755  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:51.237152  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:51.238347  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:52.231507  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:52.235943  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:52.236179  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:52.236825  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:52.236911  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:52.237367  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:52.238488  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 09:20:52.844637  110351 factory.go:599] Error getting pod permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod for retry: Get http://127.0.0.1:35833/api/v1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/pods/signalling-pod: dial tcp 127.0.0.1:35833: connect: connection refused; retrying...
I0814 09:20:53.231647  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:53.236238  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:53.236357  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:53.236959  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:53.237179  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:53.237515  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:53.238624  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:54.231840  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:54.236398  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:54.236471  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:54.237080  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:54.237349  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:54.237679  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:54.238814  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:55.232020  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:55.236528  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:55.236682  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:55.237219  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:55.237512  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:55.237778  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:55.238991  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:56.235915  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:56.236648  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:56.237702  110351 httplog.go:90] GET /api/v1/namespaces/default: (1.550497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:56.237852  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:56.241498  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:56.241506  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:56.241535  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:56.241609  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:56.242372  110351 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.36325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:56.244528  110351 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.7413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:57.236083  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:57.236863  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:57.238895  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:57.241742  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:57.241764  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:57.241766  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:57.241794  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:57.434232  110351 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods: (2.321387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:57.434909  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:20:57.434935  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:20:57.435068  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:20:57.435135  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:20:57.438951  110351 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod/status: (3.138674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:57.439782  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (3.708446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
E0814 09:20:57.440399  110351 factory.go:590] pod is already present in the activeQ
I0814 09:20:57.440862  110351 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/events: (4.693735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51670]
I0814 09:20:57.441792  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.262629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0814 09:20:57.442087  110351 generic_scheduler.go:1193] Node test-node-0 is a potential node for preemption.
I0814 09:20:57.444678  110351 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod/status: (1.880716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:57.447895  110351 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/waiting-pod: (2.727031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
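The waiting-pod deleted here is the preemption victim: in this test it is being held at the scheduler framework's Permit extension point when the preemptor arrives. For orientation only, a rough sketch of a Permit plugin that parks pods in a waiting state, written against the present-day framework package; the package path and exact signatures at this 2019 commit (framework v1alpha1) differed, so treat it as illustrative rather than the test's actual plugin:

package permitplugin

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework"
)

type waitingPermitPlugin struct{}

func (pl *waitingPermitPlugin) Name() string { return "waiting-permit-plugin" }

// Permit returns a Wait status, parking the pod until it is explicitly
// allowed or rejected, or until the returned timeout elapses.
func (pl *waitingPermitPlugin) Permit(ctx context.Context, state *framework.CycleState, p *v1.Pod, nodeName string) (*framework.Status, time.Duration) {
	return framework.NewStatus(framework.Wait, ""), 10 * time.Second
}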
I0814 09:20:57.448212  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:20:57.448228  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:20:57.448368  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:20:57.448406  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:20:57.451007  110351 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/events: (1.496158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51684]
I0814 09:20:57.451210  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.98806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:57.451262  110351 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/events: (2.807367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:57.451265  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.569838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51676]
I0814 09:20:57.539291  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.015502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:57.639223  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (3.117694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:57.739648  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (3.684843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:57.837879  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.690639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:57.940171  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (4.045714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:58.037617  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.53246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:58.137952  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.643696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:58.236187  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:58.236995  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:58.237896  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.717721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:58.239051  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:58.241882  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:58.241885  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:58.241919  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:58.241940  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:58.242052  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:20:58.242077  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:20:58.242226  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:20:58.242271  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:20:58.244203  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.245216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:58.245322  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.508012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:58.247239  110351 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/events/preemptor-pod.15babf92be976f57: (2.921304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51928]
I0814 09:20:58.337733  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.590658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:58.437935  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.850341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:58.537674  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.539721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:58.637805  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.65889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:58.737821  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.743211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:58.837923  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.762482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:58.939275  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.045054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:59.038828  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.62551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:59.137731  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.582018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
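The GET requests against .../pods/preemptor-pod arriving roughly every 100 ms are the signature of a client-side poll by the test harness waiting for the preemption to take effect. A minimal sketch of such a poll, assuming a client-go clientset of that era (two-argument Get) and the apimachinery wait helpers; the helper name and the condition checked are assumptions, not taken from this test:

package pollsketch

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNominatedNode polls the preemptor pod every 100 ms until the
// scheduler has recorded a nominated node for it (i.e. preemption has been
// attempted) or the timeout expires.
func waitForNominatedNode(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(100*time.Millisecond, wait.ForeverTestTimeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Status.NominatedNodeName != "", nil
	})
}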
I0814 09:20:59.236732  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:59.237305  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:59.237922  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.731627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:59.239230  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:59.242029  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:59.242033  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:59.242038  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:59.242069  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:20:59.242125  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:20:59.242139  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:20:59.242276  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:20:59.242319  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:20:59.244076  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.338289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:59.244076  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.311782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
E0814 09:20:59.244344  110351 factory.go:590] pod is already present in the activeQ
I0814 09:20:59.244410  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:20:59.244426  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:20:59.244525  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:20:59.244601  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:20:59.246195  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.229839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:20:59.246487  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.499602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:59.337838  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.776468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:59.438079  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.90768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:59.537660  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.518128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:59.637989  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.911872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:59.737802  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.757667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:59.837406  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.338387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:20:59.937730  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.595086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:00.039047  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.818698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:00.137692  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.549812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:00.236973  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:00.237500  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:00.237651  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.546645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:00.239412  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:00.242147  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:00.242176  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:00.242268  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:00.242271  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:00.242662  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:00.242688  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:00.242810  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:00.242861  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:00.244976  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.389251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:00.245010  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.751734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:00.337903  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.693597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:00.437879  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.767624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:00.537561  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.428693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:00.637788  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.712907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:00.738198  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.097301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:00.837870  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.754612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:00.937819  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.627108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:01.037710  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.702976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:01.138110  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.907681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:01.237109  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:01.237697  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:01.238703  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.637701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:01.239640  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:01.242392  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:01.242423  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:01.242449  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:01.242430  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:01.242685  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:01.242708  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:01.242914  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:01.242985  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:01.244912  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.628565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:01.244995  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.657428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:01.338065  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.826892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:01.437689  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.548465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:01.538146  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.914911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:01.637981  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.828415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:01.738052  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.988328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:01.837673  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.565166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:01.937872  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.710855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
E0814 09:21:01.974143  110351 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35833/apis/events.k8s.io/v1beta1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/events: dial tcp 127.0.0.1:35833: connect: connection refused' (may retry after sleeping)
I0814 09:21:02.037794  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.673349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:02.138044  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.866242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:02.237826  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:02.237894  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.791141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:02.238313  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:02.239851  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:02.242578  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:02.242620  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:02.242669  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:02.242695  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:02.242763  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:02.242782  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:02.243216  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:02.243295  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:02.245766  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.546163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:02.245778  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.042544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
E0814 09:21:02.246041  110351 factory.go:590] pod is already present in the activeQ
I0814 09:21:02.246153  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:02.246170  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:02.246294  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:02.246331  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:02.254394  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (7.571805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:02.255160  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (8.28699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:02.338209  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.988213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:02.437933  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.756526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:02.538246  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.111293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:02.637453  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.370497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:02.737932  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.756597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:02.838144  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.959187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:02.941679  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.721073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:03.037758  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.728492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:03.137723  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.627363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:03.237916  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.76868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:03.238005  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:03.238436  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:03.240023  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:03.242730  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:03.242799  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:03.242748  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:03.242870  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:03.242912  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:03.242930  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:03.243083  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:03.243132  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:03.245056  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.564891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:03.245560  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.858929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:03.337791  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.696588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:03.437516  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.421672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:03.538096  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.906913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:03.637688  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.611339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:03.737772  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.733556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:03.837757  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.705556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:03.937848  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.778927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:04.037673  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.585939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:04.137643  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.512309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:04.237793  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.730015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:04.238173  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:04.238612  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:04.240190  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:04.242919  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:04.242948  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:04.243206  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:04.243277  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:04.338023  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.944408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:04.437706  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.570942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:04.537516  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.481359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:04.637311  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.32494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:04.737506  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.472564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:04.837515  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.505696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:04.937628  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.532579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:05.037977  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.915664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:05.137576  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.533724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:05.237494  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.42549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:05.238367  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:05.238807  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:05.240425  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:05.243031  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:05.243039  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:05.243164  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:05.243181  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:05.243273  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:05.243342  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:05.243398  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:05.243412  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:05.244951  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.416503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:05.244980  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.266682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:05.337710  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.609454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:05.437569  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.504661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:05.537794  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.696118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:05.637397  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.344025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:05.737701  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.585458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:05.837688  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.62585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:05.937779  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.654934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:06.037786  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.76159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:06.142044  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (5.989225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:06.237705  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.615572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:06.238339  110351 httplog.go:90] GET /api/v1/namespaces/default: (2.154784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:06.238559  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:06.238942  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:06.239885  110351 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.187404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:06.240566  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:06.241712  110351 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.324985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:06.243179  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:06.243202  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:06.243324  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:06.243335  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:06.243449  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:06.243487  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:06.243515  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:06.243562  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:06.245283  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.221856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:06.245354  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.234785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:06.337825  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.735568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:06.437600  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.502704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:06.537720  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.633582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:06.637778  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.681269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:06.737661  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.576968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:06.837521  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.441079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:06.937709  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.668011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:07.037909  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.800642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:07.137630  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.577569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:07.232281  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:07.232322  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:07.232723  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:07.232955  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:07.234927  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.501621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:07.234930  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.506013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:07.237135  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.100228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:07.238780  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:07.239051  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:07.240758  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:07.243333  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:07.243340  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:07.243638  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:07.243643  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:07.339524  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (3.440571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:07.437468  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.385854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:07.537628  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.624069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:07.637787  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.671664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:07.737877  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.788771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:07.837682  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.526004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:07.937528  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.390229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:08.037765  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.751034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:08.137863  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.703763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:08.237808  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.698677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:08.238963  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:08.239201  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:08.241101  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:08.243683  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:08.243846  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:08.243865  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:08.243947  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:08.243971  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:08.243998  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:08.244038  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:08.244057  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:08.245925  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.606616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:08.245925  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.662899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:08.338005  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.88704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:08.437752  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.666752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:08.537665  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.581152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:08.637671  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.548482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:08.738197  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.123929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:08.837821  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.778152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:08.937386  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.335958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:09.037764  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.684821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:09.137794  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.655188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:09.237768  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.629162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:09.239149  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:09.239335  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:09.241270  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:09.243870  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:09.244074  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:09.244136  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:09.244210  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:09.244573  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:09.244690  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:09.245073  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:09.245345  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:09.247224  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.440255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:09.247673  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.81317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:09.337955  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.877017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:09.442682  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (6.462982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:09.538336  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.190711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:09.637784  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.724326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:09.737544  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.347021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:09.837665  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.620311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:09.938032  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.961565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:10.038270  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.33944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:10.137751  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.634932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:10.237854  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.77392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:10.239317  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:10.239658  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:10.241501  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:10.244066  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:10.245001  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:10.245042  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:10.245055  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:10.337899  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.819419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:10.437427  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.324176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:10.537600  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.529122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:10.637934  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.839266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:10.739356  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.41926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:10.837628  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.472422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:10.937429  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.34445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:11.037887  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.892848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:11.138787  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.706858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:11.237469  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.342984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:11.239489  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:11.239817  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:11.241671  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:11.244243  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:11.244369  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:11.244382  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:11.244502  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:11.244565  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:11.245150  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:11.245216  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:11.245247  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:11.247637  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.773093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:11.247874  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (3.077826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:11.337403  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.354616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:11.437324  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.216202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:11.537641  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.494746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:11.637425  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.353167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:11.737807  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.718266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:11.839574  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (3.511611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:11.937420  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.345584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:12.037330  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.41262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:12.137420  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.364565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:12.237335  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.230299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:12.239646  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:12.239918  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:12.241866  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:12.244698  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:12.244821  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:12.244834  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:12.244937  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:12.244990  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:12.245486  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:12.245717  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:12.245739  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:12.247473  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.145149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:12.247473  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.847745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:12.337908  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.840128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:12.437153  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.176587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:12.537674  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.554054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:12.638203  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.978494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:12.738235  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.197722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:12.837568  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.47797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:12.937526  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.447162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:13.037825  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.757892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
E0814 09:21:13.078034  110351 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35833/apis/events.k8s.io/v1beta1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/events: dial tcp 127.0.0.1:35833: connect: connection refused' (may retry after sleeping)
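
The E-level line above appears to be leftover activity from an earlier test fixture rather than part of this test: the event broadcaster is still trying to post an event for the permit-plugina2c91046-... namespace against the apiserver at 127.0.0.1:35833, which has already been shut down, so the POST fails with "connection refused" and the broadcaster backs off and retries. A refused dial like this is just a TCP connect to a port with no listener; a minimal illustrative sketch in Go (the address is copied from the log and assumed to have no listener):

    // Illustrative only: dialing a local port with no listener produces the
    // same "connect: connection refused" error text seen in the log line above.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, err := net.Dial("tcp", "127.0.0.1:35833") // nothing is listening here anymore
        fmt.Println(err) // e.g. dial tcp 127.0.0.1:35833: connect: connection refused
    }
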
I0814 09:21:13.138162  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.036159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:13.239772  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:13.239990  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.494432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:13.240060  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:13.242045  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:13.245797  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:13.250888  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:13.250971  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:13.250984  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:13.251038  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:13.251048  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:13.251173  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:13.251213  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:13.255259  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (3.24549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:13.255641  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (4.086051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:13.337606  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.50691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:13.437426  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.319699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:13.537331  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.302179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:13.637755  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.745579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:13.738110  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.967482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:13.837534  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.458921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:13.937414  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.363471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:14.037472  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.496438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:14.137426  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.327344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:14.238035  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.511511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:14.239936  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:14.240200  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:14.242214  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:14.245954  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:14.251056  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:14.251178  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:14.251194  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:14.251314  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:14.251355  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:14.251688  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:14.251708  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:14.253755  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.767598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:14.254239  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.758451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:14.337228  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.17144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:14.438909  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.452442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:14.537753  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.671728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:14.637698  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.643025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:14.737547  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.52864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:14.837502  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.424605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:14.937488  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.379928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:15.037823  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.748442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:15.137415  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.376613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:15.237444  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.405405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:15.240106  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:15.240361  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:15.242381  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:15.246213  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:15.251271  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:15.251380  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:15.251397  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:15.251562  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:15.251671  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:15.251835  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:15.251860  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:15.253320  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.372925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:15.253329  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.377055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:15.337936  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.682558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:15.437431  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.363175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:15.537553  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.555659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:15.637522  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.415151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:15.737558  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.427285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:15.837435  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.322314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:15.937569  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.577163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:16.037290  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.3164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:16.137369  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.306349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:16.237244  110351 httplog.go:90] GET /api/v1/namespaces/default: (1.035478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:16.237961  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.906834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:16.238876  110351 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.081069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:16.240186  110351 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (969.55µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:16.240222  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:16.240446  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:16.243442  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:16.246525  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:16.251432  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:16.251573  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:16.251651  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:16.251768  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:16.251805  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:16.252175  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:16.252201  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:16.254681  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.642217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:16.254872  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.720146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:16.337409  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.35014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:16.437557  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.43627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:16.537268  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.249605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:16.637410  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.285705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:16.737487  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.398632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:16.837728  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.651759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:16.937559  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.517236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:17.037553  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.548501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:17.137317  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.23347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:17.237822  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.762781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:17.240384  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:17.240612  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:17.243623  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:17.246746  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:17.251631  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:17.251751  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:17.251794  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:17.251920  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:17.251986  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:17.252312  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:17.252341  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:17.253869  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.616585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:17.253922  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.537105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:17.337934  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.820786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:17.437800  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.768514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:17.538053  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.939632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:17.637965  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.894873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:17.737844  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.75537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:17.837482  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.436542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:17.938118  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.764308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:18.037962  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.82451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:18.137958  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.784016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:18.237690  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.484465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:18.240566  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:18.240756  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:18.243783  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:18.246932  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:18.251894  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:18.252048  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:18.252065  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:18.252197  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:18.252260  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:18.253144  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:18.253161  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:18.255477  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.9129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:18.255705  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (3.008083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:18.337905  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.820611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:18.437756  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.613836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
E0814 09:21:18.445132  110351 factory.go:599] Error getting pod permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/signalling-pod for retry: Get http://127.0.0.1:35833/api/v1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/pods/signalling-pod: dial tcp 127.0.0.1:35833: connect: connection refused; retrying...
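
Like the earlier event-broadcaster failure, this retry error still references the permit-plugina2c91046-... namespace and the 127.0.0.1:35833 apiserver from a previous test fixture, so it appears to be background retry traffic for that torn-down environment rather than part of the preempt-with-permit-plugin failure itself; the scheduler keeps retrying because the pod lookup fails with "connection refused".
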
I0814 09:21:18.537695  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.6896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:18.637917  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.819824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:18.737993  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.833524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:18.837460  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.421064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:18.937687  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.579167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:19.037461  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.473232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:19.137480  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.38113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:19.237764  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.650986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:19.240818  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:19.240885  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:19.243960  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:19.247100  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:19.252072  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:19.252198  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:19.252209  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:19.252340  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:19.252377  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:19.253284  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:19.253448  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:19.254146  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.338216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:19.254153  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.436489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:19.337780  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.692261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:19.437565  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.460975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:19.537807  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.624905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:19.637549  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.547845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:19.739312  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.602718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:19.837483  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.441341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:19.937444  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.377918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:20.037675  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.649239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:20.137747  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.672948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:20.237735  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.664447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:20.240977  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:20.241130  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:20.244115  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:20.248226  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:20.252262  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:20.252375  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:20.252388  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:20.252522  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:20.252564  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:20.253988  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:20.255111  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:20.255831  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.094918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:20.256207  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.988391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:20.337778  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.686341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:20.437433  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.37903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:20.538192  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.177136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:20.637532  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.474433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:20.737675  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.596132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:20.837503  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.418155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:20.937466  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.400074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:21.037668  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.607346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:21.137520  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.482518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:21.237678  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.406476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:21.241095  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:21.241277  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:21.244229  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:21.248386  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:21.252448  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:21.252539  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:21.252548  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:21.252686  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:21.252718  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:21.254138  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:21.254576  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.116856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:21.254894  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.840614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:21.255289  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:21.337473  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.417004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:21.438017  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.660708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:21.537902  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.82268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:21.637702  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.615098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:21.738064  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.949264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:21.837962  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.934857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:21.937814  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.744725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.037783  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.715145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.137783  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.643588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.238190  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.128679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.241231  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:22.241468  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:22.244406  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:22.248496  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:22.252547  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:22.252666  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:22.252675  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:22.252785  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:22.252821  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:22.254277  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:22.254564  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.440797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:22.254980  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.953618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.255388  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:22.337398  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.344414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.437701  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.605408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.537572  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.551287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.637908  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.852549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.737864  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.809024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.837818  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.732328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.938038  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.938107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.942665  110351 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.311305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.944091  110351 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.072217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:22.945394  110351 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (851.719µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:23.037713  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.596873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:23.137314  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.249613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:23.237716  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.627929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:23.241353  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:23.241559  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:23.244547  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:23.248857  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:23.252709  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:23.252825  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:23.252852  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:23.252979  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:23.253040  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:23.254934  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:23.255832  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.438598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:23.256118  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.742565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:23.256326  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:23.337667  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.547389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:23.437743  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.68919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:23.537513  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.393322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:23.637407  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.301026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:23.737568  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.54396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:23.837557  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.523749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:23.937689  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.620926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:24.037751  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.623904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:24.137576  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.520063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:24.237865  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.708615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:24.241665  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:24.241850  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:24.244753  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:24.248995  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:24.252882  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:24.253016  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:24.253037  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:24.253170  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:24.253217  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:24.254940  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.493248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:24.254976  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.404993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:24.255174  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:24.256475  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 09:21:24.257821  110351 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:35833/apis/events.k8s.io/v1beta1/namespaces/permit-plugina2c91046-abd2-489d-bfec-1b9e4562f171/events: dial tcp 127.0.0.1:35833: connect: connection refused' (may retry after sleeping)
I0814 09:21:24.337646  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.524888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:24.437484  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.408347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:24.537868  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.790215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:24.637669  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.579898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:24.737453  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.384434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:24.837530  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.508726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:24.937409  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.340707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:25.037652  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.550357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:25.137762  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.648884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:25.237701  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.591397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:25.241812  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:25.242006  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:25.244932  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:25.249176  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:25.253065  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:25.253189  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:25.253202  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:25.253330  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:25.253381  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:25.255061  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.394874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:25.255283  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.469405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:25.255340  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:25.256656  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:25.337494  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.415989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:25.437706  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.632253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:25.537722  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.635813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:25.637827  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.713043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:25.737804  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.704864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:25.837744  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.635993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:25.937919  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.734954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:26.037951  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.751482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:26.138328  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.229893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:26.237495  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.446326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:26.237558  110351 httplog.go:90] GET /api/v1/namespaces/default: (1.158851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:26.239061  110351 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.003583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:26.240352  110351 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (892.857µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:26.241936  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:26.242159  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:26.245266  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:26.249375  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:26.253209  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:26.253393  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:26.253444  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:26.253650  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:26.253739  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:26.255518  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.144441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:26.255527  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.449426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:26.255714  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:26.256795  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:26.338103  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.871413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:26.438065  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.951563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:26.537677  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.549911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:26.637499  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.507037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:26.737358  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.35443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:26.838052  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.961835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:26.938119  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.032536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:27.038001  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.905355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:27.137516  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.400668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:27.237578  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.539441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:27.242014  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:27.242357  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:27.245509  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:27.249598  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:27.253430  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:27.253574  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:27.253606  110351 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:27.253726  110351 factory.go:550] Unable to schedule preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 09:21:27.253771  110351 factory.go:624] Updating pod condition for preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 09:21:27.255376  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.35396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:27.255380  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.331549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:27.256159  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:27.257051  110351 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 09:21:27.337837  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.723094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:27.438816  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (2.724993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:27.440570  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (1.29179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:27.442259  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/waiting-pod: (1.203009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:27.448567  110351 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/waiting-pod: (5.717216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:27.452242  110351 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:27.452269  110351 scheduler.go:473] Skip schedule deleting pod: preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/preemptor-pod
I0814 09:21:27.453766  110351 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (4.821869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51672]
I0814 09:21:27.454010  110351 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/events: (1.470983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:27.457720  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/waiting-pod: (2.408959ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:27.461005  110351 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugine6983a27-f2bd-472a-b265-c65f5dba20fe/pods/preemptor-pod: (941.802µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
E0814 09:21:27.461477  110351 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0814 09:21:27.461892  110351 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=28603&timeout=6m36s&timeoutSeconds=396&watch=true: (1m1.232839736s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0814 09:21:27.461922  110351 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=28603&timeout=6m36s&timeoutSeconds=396&watch=true: (1m1.232297549s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47344]
I0814 09:21:27.461960  110351 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=28603&timeout=7m4s&timeoutSeconds=424&watch=true: (1m1.234743215s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47658]
I0814 09:21:27.462001  110351 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=28603&timeout=5m24s&timeoutSeconds=324&watch=true: (1m1.233738282s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47642]
I0814 09:21:27.462072  110351 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=28602&timeout=7m14s&timeoutSeconds=434&watch=true: (1m1.232091976s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47650]
I0814 09:21:27.462113  110351 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=28601&timeout=9m54s&timeoutSeconds=594&watch=true: (1m1.233859582s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47646]
I0814 09:21:27.462078  110351 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=28601&timeout=7m49s&timeoutSeconds=469&watch=true: (1m1.235171916s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47652]
I0814 09:21:27.462118  110351 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=28601&timeout=7m39s&timeoutSeconds=459&watch=true: (1m1.23629843s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47654]
I0814 09:21:27.462296  110351 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=28603&timeout=5m25s&timeoutSeconds=325&watch=true: (1m1.235648959s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47656]
I0814 09:21:27.462300  110351 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=28601&timeout=7m21s&timeoutSeconds=441&watch=true: (1m1.233219102s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47638]
I0814 09:21:27.462630  110351 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=28601&timeout=8m49s&timeoutSeconds=529&watch=true: (1m1.231338312s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47640]
I0814 09:21:27.466219  110351 httplog.go:90] DELETE /api/v1/nodes: (4.064455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:27.466383  110351 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 09:21:27.467466  110351 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (878.908µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
I0814 09:21:27.469335  110351 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.54075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51682]
--- FAIL: TestPreemptWithPermitPlugin (64.74s)
    framework_test.go:1618: Expected the preemptor pod to be scheduled. error: timed out waiting for the condition
    framework_test.go:1622: Expected the waiting pod to get preempted and deleted

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-091316.xml
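The repeated GET .../pods/preemptor-pod requests above, arriving roughly every 100ms, are the test's poll loop waiting for the preemptor pod to be bound to a node; the failure at framework_test.go:1618 is that poll expiring. Below is a minimal sketch of such a condition wait, with hypothetical helper names and assuming a recent client-go; it is not the test framework's own code.

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls the API server until the named pod is bound to a
// node and reports PodScheduled=True, or until the timeout expires. On
// timeout, wait.Poll returns the generic "timed out waiting for the
// condition" error, which is the message recorded in the failure above.
func waitForPodScheduled(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Spec.NodeName == "" {
			return false, nil // not bound to a node yet; keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodScheduled && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Usage shape only; constructing a real clientset against the test API
	// server is omitted from this sketch.
	var cs kubernetes.Interface
	if cs != nil {
		_ = waitForPodScheduled(cs, "test-ns", "preemptor-pod", 30*time.Second)
	}
}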




Error lines from build-log.txt

... skipping 693 lines ...
W0814 09:08:05.890] I0814 09:08:05.889151   53035 controllermanager.go:535] Started "daemonset"
W0814 09:08:05.890] I0814 09:08:05.889192   53035 daemon_controller.go:267] Starting daemon sets controller
W0814 09:08:05.890] I0814 09:08:05.889429   53035 controller_utils.go:1029] Waiting for caches to sync for daemon sets controller
W0814 09:08:05.890] I0814 09:08:05.889789   53035 controllermanager.go:535] Started "deployment"
W0814 09:08:05.890] I0814 09:08:05.889942   53035 deployment_controller.go:152] Starting deployment controller
W0814 09:08:05.891] I0814 09:08:05.889988   53035 controller_utils.go:1029] Waiting for caches to sync for deployment controller
W0814 09:08:05.891] E0814 09:08:05.890203   53035 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0814 09:08:05.891] W0814 09:08:05.890222   53035 controllermanager.go:527] Skipping "service"
W0814 09:08:05.891] I0814 09:08:05.890650   53035 controllermanager.go:535] Started "replicationcontroller"
W0814 09:08:05.891] I0814 09:08:05.890877   53035 replica_set.go:182] Starting replicationcontroller controller
W0814 09:08:05.891] I0814 09:08:05.891019   53035 controller_utils.go:1029] Waiting for caches to sync for ReplicationController controller
I0814 09:08:06.069] +++ [0814 09:08:06] On try 2, controller-manager: ok
W0814 09:08:06.196] I0814 09:08:06.195934   53035 garbagecollector.go:129] Starting garbage collector controller
... skipping 17 lines ...
W0814 09:08:06.213] I0814 09:08:06.212701   53035 controllermanager.go:535] Started "pv-protection"
W0814 09:08:06.213] I0814 09:08:06.212808   53035 pv_protection_controller.go:82] Starting PV protection controller
W0814 09:08:06.213] I0814 09:08:06.212962   53035 controller_utils.go:1029] Waiting for caches to sync for PV protection controller
W0814 09:08:06.214] I0814 09:08:06.213811   53035 controllermanager.go:535] Started "serviceaccount"
W0814 09:08:06.214] I0814 09:08:06.214082   53035 serviceaccounts_controller.go:117] Starting service account controller
W0814 09:08:06.214] I0814 09:08:06.214239   53035 node_lifecycle_controller.go:77] Sending events to api server
W0814 09:08:06.215] E0814 09:08:06.214340   53035 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0814 09:08:06.215] W0814 09:08:06.214365   53035 controllermanager.go:527] Skipping "cloud-node-lifecycle"
W0814 09:08:06.215] W0814 09:08:06.214386   53035 controllermanager.go:527] Skipping "root-ca-cert-publisher"
W0814 09:08:06.215] W0814 09:08:06.214411   53035 controllermanager.go:527] Skipping "ttl-after-finished"
W0814 09:08:06.216] I0814 09:08:06.214252   53035 controller_utils.go:1029] Waiting for caches to sync for service account controller
W0814 09:08:06.216] I0814 09:08:06.215087   53035 controllermanager.go:535] Started "endpoint"
W0814 09:08:06.216] I0814 09:08:06.215438   53035 endpoints_controller.go:170] Starting endpoint controller
... skipping 66 lines ...
W0814 09:08:06.464] I0814 09:08:06.392797   53035 node_lifecycle_controller.go:418] Controller will reconcile labels.
W0814 09:08:06.464] I0814 09:08:06.392854   53035 node_lifecycle_controller.go:431] Controller will taint node by condition.
W0814 09:08:06.464] I0814 09:08:06.392890   53035 controllermanager.go:535] Started "nodelifecycle"
W0814 09:08:06.464] I0814 09:08:06.393458   53035 node_lifecycle_controller.go:455] Starting node controller
W0814 09:08:06.464] I0814 09:08:06.393482   53035 controller_utils.go:1029] Waiting for caches to sync for taint controller
W0814 09:08:06.464] I0814 09:08:06.419972   53035 controller_utils.go:1036] Caches are synced for certificate controller
W0814 09:08:06.465] W0814 09:08:06.446870   53035 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0814 09:08:06.491] I0814 09:08:06.491009   53035 controller_utils.go:1036] Caches are synced for TTL controller
W0814 09:08:06.517] I0814 09:08:06.516548   53035 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0814 09:08:06.536] E0814 09:08:06.535299   53035 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0814 09:08:06.685] I0814 09:08:06.684433   53035 controller_utils.go:1036] Caches are synced for namespace controller
W0814 09:08:06.729] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0814 09:08:06.816] I0814 09:08:06.815861   53035 controller_utils.go:1036] Caches are synced for endpoint controller
W0814 09:08:06.818] I0814 09:08:06.817744   53035 controller_utils.go:1036] Caches are synced for stateful set controller
W0814 09:08:06.818] I0814 09:08:06.817766   53035 controller_utils.go:1036] Caches are synced for job controller
W0814 09:08:06.889] I0814 09:08:06.888953   53035 controller_utils.go:1036] Caches are synced for GC controller
... skipping 103 lines ...
I0814 09:08:10.691] +++ working dir: /go/src/k8s.io/kubernetes
I0814 09:08:10.694] +++ command: run_RESTMapper_evaluation_tests
I0814 09:08:10.708] +++ [0814 09:08:10] Creating namespace namespace-1565773690-8298
I0814 09:08:10.789] namespace/namespace-1565773690-8298 created
I0814 09:08:10.865] Context "test" modified.
I0814 09:08:10.873] +++ [0814 09:08:10] Testing RESTMapper
I0814 09:08:10.989] +++ [0814 09:08:10] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0814 09:08:11.003] +++ exit code: 0
I0814 09:08:11.129] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0814 09:08:11.130] bindings                                                                      true         Binding
I0814 09:08:11.130] componentstatuses                 cs                                          false        ComponentStatus
I0814 09:08:11.130] configmaps                        cm                                          true         ConfigMap
I0814 09:08:11.131] endpoints                         ep                                          true         Endpoints
... skipping 664 lines ...
I0814 09:08:30.917] poddisruptionbudget.policy/test-pdb-3 created
I0814 09:08:31.017] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0814 09:08:31.086] poddisruptionbudget.policy/test-pdb-4 created
I0814 09:08:31.187] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0814 09:08:31.331] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:08:31.514] pod/env-test-pod created
W0814 09:08:31.615] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0814 09:08:31.615] error: setting 'all' parameter but found a non empty selector. 
W0814 09:08:31.616] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 09:08:31.616] I0814 09:08:30.603349   49603 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0814 09:08:31.617] error: min-available and max-unavailable cannot be both specified
I0814 09:08:31.717] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0814 09:08:31.718] Name:         env-test-pod
I0814 09:08:31.718] Namespace:    test-kubectl-describe-pod
I0814 09:08:31.719] Priority:     0
I0814 09:08:31.719] Node:         <none>
I0814 09:08:31.719] Labels:       <none>
... skipping 173 lines ...
I0814 09:08:45.222] pod/valid-pod patched
I0814 09:08:45.320] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0814 09:08:45.395] pod/valid-pod patched
I0814 09:08:45.490] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0814 09:08:45.652] pod/valid-pod patched
I0814 09:08:45.755] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 09:08:45.930] +++ [0814 09:08:45] "kubectl patch with resourceVersion 496" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0814 09:08:46.174] pod "valid-pod" deleted
I0814 09:08:46.185] pod/valid-pod replaced
I0814 09:08:46.287] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0814 09:08:46.452] Successful
I0814 09:08:46.453] message:error: --grace-period must have --force specified
I0814 09:08:46.453] has:\-\-grace-period must have \-\-force specified
I0814 09:08:46.634] Successful
I0814 09:08:46.634] message:error: --timeout must have --force specified
I0814 09:08:46.634] has:\-\-timeout must have \-\-force specified
I0814 09:08:46.782] node/node-v1-test created
W0814 09:08:46.883] W0814 09:08:46.782198   53035 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
W0814 09:08:46.897] I0814 09:08:46.896848   53035 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"c59e01b1-6a0b-4428-b3c5-3cae801fbb8b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
I0814 09:08:46.998] node/node-v1-test replaced
I0814 09:08:47.049] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0814 09:08:47.130] node "node-v1-test" deleted
I0814 09:08:47.235] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 09:08:47.526] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 36 lines ...
I0814 09:08:49.963] pod/redis-master created
I0814 09:08:49.966] pod/valid-pod created
W0814 09:08:50.067] Edit cancelled, no changes made.
W0814 09:08:50.067] Edit cancelled, no changes made.
W0814 09:08:50.067] Edit cancelled, no changes made.
W0814 09:08:50.067] Edit cancelled, no changes made.
W0814 09:08:50.067] error: 'name' already has a value (valid-pod), and --overwrite is false
W0814 09:08:50.067] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 09:08:50.168] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0814 09:08:50.173] core.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0814 09:08:50.250] (Bpod "redis-master" deleted
I0814 09:08:50.261] pod "valid-pod" deleted
I0814 09:08:50.362] core.sh:622: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 73 lines ...
I0814 09:08:56.694] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0814 09:08:56.697] +++ working dir: /go/src/k8s.io/kubernetes
I0814 09:08:56.699] +++ command: run_kubectl_create_error_tests
I0814 09:08:56.712] +++ [0814 09:08:56] Creating namespace namespace-1565773736-4173
I0814 09:08:56.795] namespace/namespace-1565773736-4173 created
I0814 09:08:56.881] Context "test" modified.
I0814 09:08:56.889] +++ [0814 09:08:56] Testing kubectl create with error
W0814 09:08:56.990] Error: must specify one of -f and -k
W0814 09:08:56.990] 
W0814 09:08:56.990] Create a resource from a file or from stdin.
W0814 09:08:56.990] 
W0814 09:08:56.990]  JSON and YAML formats are accepted.
W0814 09:08:56.990] 
W0814 09:08:56.991] Examples:
... skipping 41 lines ...
W0814 09:08:56.996] 
W0814 09:08:56.996] Usage:
W0814 09:08:56.996]   kubectl create -f FILENAME [options]
W0814 09:08:56.996] 
W0814 09:08:56.996] Use "kubectl <command> --help" for more information about a given command.
W0814 09:08:56.996] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0814 09:08:57.131] +++ [0814 09:08:57] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0814 09:08:57.232] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 09:08:57.233] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 09:08:57.333] +++ exit code: 0
I0814 09:08:57.359] Recording: run_kubectl_apply_tests
I0814 09:08:57.359] Running command: run_kubectl_apply_tests
I0814 09:08:57.383] 
... skipping 19 lines ...
W0814 09:08:59.598] I0814 09:08:59.597444   49603 client.go:354] parsed scheme: ""
W0814 09:08:59.599] I0814 09:08:59.597476   49603 client.go:354] scheme "" not registered, fallback to default scheme
W0814 09:08:59.599] I0814 09:08:59.597516   49603 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0814 09:08:59.600] I0814 09:08:59.597564   49603 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 09:08:59.600] I0814 09:08:59.598190   49603 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 09:08:59.601] I0814 09:08:59.600600   49603 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0814 09:08:59.689] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0814 09:08:59.790] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0814 09:08:59.790] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 09:08:59.804] +++ exit code: 0
I0814 09:08:59.842] Recording: run_kubectl_run_tests
I0814 09:08:59.842] Running command: run_kubectl_run_tests
I0814 09:08:59.864] 
... skipping 97 lines ...
I0814 09:09:02.510] Context "test" modified.
I0814 09:09:02.519] +++ [0814 09:09:02] Testing kubectl create filter
I0814 09:09:02.614] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:09:02.814] pod/selector-test-pod created
I0814 09:09:02.917] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0814 09:09:03.006] Successful
I0814 09:09:03.006] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0814 09:09:03.007] has:pods "selector-test-pod-dont-apply" not found
I0814 09:09:03.086] pod "selector-test-pod" deleted
I0814 09:09:03.106] +++ exit code: 0
I0814 09:09:03.142] Recording: run_kubectl_apply_deployments_tests
I0814 09:09:03.142] Running command: run_kubectl_apply_deployments_tests
I0814 09:09:03.165] 
... skipping 18 lines ...
I0814 09:09:04.451] apps.sh:130: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1
I0814 09:09:04.542] apps.sh:131: Successful get deployments my-depl {{.spec.selector.matchLabels.l1}}: l1
I0814 09:09:04.639] apps.sh:132: Successful get deployments my-depl {{.metadata.labels.l1}}: <no value>
I0814 09:09:04.729] deployment.apps "my-depl" deleted
I0814 09:09:04.738] replicaset.apps "my-depl-67dc88cf84" deleted
I0814 09:09:04.745] pod "my-depl-67dc88cf84-s4g5x" deleted
W0814 09:09:04.846] E0814 09:09:04.753764   53035 replica_set.go:450] Sync "namespace-1565773743-2315/my-depl-67dc88cf84" failed with replicasets.apps "my-depl-67dc88cf84" not found
I0814 09:09:04.947] apps.sh:138: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:09:04.964] apps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:09:05.063] apps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:09:05.159] apps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:09:05.326] deployment.apps/nginx created
W0814 09:09:05.427] I0814 09:09:05.330364   53035 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565773743-2315", Name:"nginx", UID:"1e4c6d41-79ab-4d66-b33c-82a8399a3220", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7dbc4d9f to 3
W0814 09:09:05.427] I0814 09:09:05.334508   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773743-2315", Name:"nginx-7dbc4d9f", UID:"472fdd39-7e57-4fe6-a345-9831998df464", APIVersion:"apps/v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-vch2f
W0814 09:09:05.428] I0814 09:09:05.337834   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773743-2315", Name:"nginx-7dbc4d9f", UID:"472fdd39-7e57-4fe6-a345-9831998df464", APIVersion:"apps/v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-tjfdq
W0814 09:09:05.428] I0814 09:09:05.339289   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773743-2315", Name:"nginx-7dbc4d9f", UID:"472fdd39-7e57-4fe6-a345-9831998df464", APIVersion:"apps/v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-ztptz
I0814 09:09:05.529] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0814 09:09:09.663] Successful
I0814 09:09:09.663] message:Error from server (Conflict): error when applying patch:
I0814 09:09:09.664] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565773743-2315\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0814 09:09:09.664] to:
I0814 09:09:09.664] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0814 09:09:09.664] Name: "nginx", Namespace: "namespace-1565773743-2315"
I0814 09:09:09.667] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565773743-2315\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-14T09:09:05Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-14T09:09:05Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-14T09:09:05Z"]] "name":"nginx" "namespace":"namespace-1565773743-2315" "resourceVersion":"590" "selfLink":"/apis/apps/v1/namespaces/namespace-1565773743-2315/deployments/nginx" "uid":"1e4c6d41-79ab-4d66-b33c-82a8399a3220"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] 
"status":map["conditions":[map["lastTransitionTime":"2019-08-14T09:09:05Z" "lastUpdateTime":"2019-08-14T09:09:05Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-14T09:09:05Z" "lastUpdateTime":"2019-08-14T09:09:05Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0814 09:09:09.668] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0814 09:09:09.668] has:Error from server (Conflict)
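The failed apply above carries a pinned resourceVersion ("99") and changes the deployment's immutable selector, so the server-side patch is rejected with a Conflict. The changed UID on the later "deployment.apps/nginx configured" line indicates the object was deleted and recreated rather than patched in place; a minimal sketch of that recovery path, using the same test-data file:
  # --force tells apply to delete and recreate the object when the computed patch cannot be applied.
  kubectl apply --force -f hack/testdata/deployment-label-change2.yaml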
W0814 09:09:11.102] I0814 09:09:11.101482   53035 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1565773734-898
I0814 09:09:14.921] deployment.apps/nginx configured
W0814 09:09:15.021] I0814 09:09:14.925058   53035 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565773743-2315", Name:"nginx", UID:"38506500-eb38-4ec6-a4c9-8f991da3ba2c", APIVersion:"apps/v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0814 09:09:15.022] I0814 09:09:14.931306   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773743-2315", Name:"nginx-594f77b9f6", UID:"032dd49a-8089-4793-8430-ec66de310d68", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-g67vw
W0814 09:09:15.022] I0814 09:09:14.935889   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773743-2315", Name:"nginx-594f77b9f6", UID:"032dd49a-8089-4793-8430-ec66de310d68", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-bxrln
W0814 09:09:15.023] I0814 09:09:14.938547   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773743-2315", Name:"nginx-594f77b9f6", UID:"032dd49a-8089-4793-8430-ec66de310d68", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-w87nn
... skipping 169 lines ...
I0814 09:09:22.309] +++ [0814 09:09:22] Creating namespace namespace-1565773762-32602
I0814 09:09:22.386] namespace/namespace-1565773762-32602 created
I0814 09:09:22.459] Context "test" modified.
I0814 09:09:22.468] +++ [0814 09:09:22] Testing kubectl get
I0814 09:09:22.567] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:09:22.657] Successful
I0814 09:09:22.657] message:Error from server (NotFound): pods "abc" not found
I0814 09:09:22.658] has:pods "abc" not found
I0814 09:09:22.756] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:09:22.854] Successful
I0814 09:09:22.854] message:Error from server (NotFound): pods "abc" not found
I0814 09:09:22.855] has:pods "abc" not found
I0814 09:09:22.952] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:09:23.044] Successful
I0814 09:09:23.044] message:{
I0814 09:09:23.045]     "apiVersion": "v1",
I0814 09:09:23.045]     "items": [],
... skipping 23 lines ...
I0814 09:09:23.397] has not:No resources found
I0814 09:09:23.485] Successful
I0814 09:09:23.485] message:NAME
I0814 09:09:23.486] has not:No resources found
I0814 09:09:23.576] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:09:23.678] (BSuccessful
I0814 09:09:23.679] message:error: the server doesn't have a resource type "foobar"
I0814 09:09:23.679] has not:No resources found
I0814 09:09:23.764] Successful
I0814 09:09:23.764] message:No resources found in namespace-1565773762-32602 namespace.
I0814 09:09:23.764] has:No resources found
I0814 09:09:23.853] Successful
I0814 09:09:23.854] message:
I0814 09:09:23.854] has not:No resources found
I0814 09:09:23.939] Successful
I0814 09:09:23.940] message:No resources found in namespace-1565773762-32602 namespace.
I0814 09:09:23.940] has:No resources found
I0814 09:09:24.038] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:09:24.126] Successful
I0814 09:09:24.127] message:Error from server (NotFound): pods "abc" not found
I0814 09:09:24.127] has:pods "abc" not found
I0814 09:09:24.128] FAIL!
I0814 09:09:24.128] message:Error from server (NotFound): pods "abc" not found
I0814 09:09:24.128] has not:List
I0814 09:09:24.129] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0814 09:09:24.243] Successful
I0814 09:09:24.244] message:I0814 09:09:24.194004   63584 loader.go:375] Config loaded from file:  /tmp/tmp.ixtr2FkaWe/.kube/config
I0814 09:09:24.244] I0814 09:09:24.195453   63584 round_trippers.go:471] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0814 09:09:24.244] I0814 09:09:24.216814   63584 round_trippers.go:471] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0814 09:09:29.841] Successful
I0814 09:09:29.841] message:NAME    DATA   AGE
I0814 09:09:29.841] one     0      0s
I0814 09:09:29.841] three   0      0s
I0814 09:09:29.841] two     0      0s
I0814 09:09:29.841] STATUS    REASON          MESSAGE
I0814 09:09:29.842] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 09:09:29.842] has not:watch is only supported on individual resources
I0814 09:09:30.937] Successful
I0814 09:09:30.938] message:STATUS    REASON          MESSAGE
I0814 09:09:30.938] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 09:09:30.938] has not:watch is only supported on individual resources
I0814 09:09:30.945] +++ [0814 09:09:30] Creating namespace namespace-1565773770-15
I0814 09:09:31.019] namespace/namespace-1565773770-15 created
I0814 09:09:31.097] Context "test" modified.
I0814 09:09:31.200] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:09:31.353] pod/valid-pod created
... skipping 104 lines ...
I0814 09:09:31.451] }
I0814 09:09:31.536] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 09:09:31.783] <no value>Successful
I0814 09:09:31.783] message:valid-pod:
I0814 09:09:31.783] has:valid-pod:
I0814 09:09:31.869] Successful
I0814 09:09:31.869] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0814 09:09:31.869] 	template was:
I0814 09:09:31.869] 		{.missing}
I0814 09:09:31.869] 	object given to jsonpath engine was:
I0814 09:09:31.871] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-14T09:09:31Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-14T09:09:31Z"}}, "name":"valid-pod", "namespace":"namespace-1565773770-15", "resourceVersion":"692", "selfLink":"/api/v1/namespaces/namespace-1565773770-15/pods/valid-pod", "uid":"a80d3a8b-4385-412b-9f10-7eb79d16825b"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0814 09:09:31.871] has:missing is not found
I0814 09:09:31.954] Successful
I0814 09:09:31.958] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0814 09:09:31.958] 	template was:
I0814 09:09:31.958] 		{{.missing}}
I0814 09:09:31.958] 	raw data was:
I0814 09:09:31.959] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-14T09:09:31Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-14T09:09:31Z"}],"name":"valid-pod","namespace":"namespace-1565773770-15","resourceVersion":"692","selfLink":"/api/v1/namespaces/namespace-1565773770-15/pods/valid-pod","uid":"a80d3a8b-4385-412b-9f10-7eb79d16825b"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0814 09:09:31.959] 	object given to template engine was:
I0814 09:09:31.960] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-14T09:09:31Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-14T09:09:31Z]] name:valid-pod namespace:namespace-1565773770-15 resourceVersion:692 selfLink:/api/v1/namespaces/namespace-1565773770-15/pods/valid-pod uid:a80d3a8b-4385-412b-9f10-7eb79d16825b] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0814 09:09:31.960] has:map has no entry for key "missing"
W0814 09:09:32.061] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
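The checks above exercise how the two output templaters report a missing key on the same valid-pod object: jsonpath fails with "missing is not found", while go-template fails with "map has no entry for key". A minimal sketch of the two forms:
  # jsonpath output: errors because .missing is not a field of the Pod.
  kubectl get pod valid-pod -o jsonpath='{.missing}'
  # go-template output: errors because the underlying map has no "missing" entry.
  kubectl get pod valid-pod -o go-template='{{.missing}}'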
I0814 09:09:33.034] Successful
I0814 09:09:33.035] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 09:09:33.035] valid-pod   0/1     Pending   0          1s
I0814 09:09:33.035] STATUS      REASON          MESSAGE
I0814 09:09:33.035] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 09:09:33.036] has:STATUS
I0814 09:09:33.036] Successful
I0814 09:09:33.036] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 09:09:33.036] valid-pod   0/1     Pending   0          1s
I0814 09:09:33.036] STATUS      REASON          MESSAGE
I0814 09:09:33.036] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 09:09:33.036] has:valid-pod
I0814 09:09:34.120] Successful
I0814 09:09:34.120] message:pod/valid-pod
I0814 09:09:34.120] has not:STATUS
I0814 09:09:34.123] Successful
I0814 09:09:34.123] message:pod/valid-pod
... skipping 144 lines ...
I0814 09:09:35.247] status:
I0814 09:09:35.247]   phase: Pending
I0814 09:09:35.247]   qosClass: Guaranteed
I0814 09:09:35.247] ---
I0814 09:09:35.247] has:name: valid-pod
I0814 09:09:35.306] Successful
I0814 09:09:35.306] message:Error from server (NotFound): pods "invalid-pod" not found
I0814 09:09:35.306] has:"invalid-pod" not found
I0814 09:09:35.385] pod "valid-pod" deleted
I0814 09:09:35.481] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:09:35.654] pod/redis-master created
I0814 09:09:35.657] pod/valid-pod created
I0814 09:09:35.758] Successful
... skipping 35 lines ...
I0814 09:09:36.958] +++ command: run_kubectl_exec_pod_tests
I0814 09:09:36.973] +++ [0814 09:09:36] Creating namespace namespace-1565773776-12369
I0814 09:09:37.084] namespace/namespace-1565773776-12369 created
I0814 09:09:37.182] Context "test" modified.
I0814 09:09:37.195] +++ [0814 09:09:37] Testing kubectl exec POD COMMAND
I0814 09:09:37.324] Successful
I0814 09:09:37.324] message:Error from server (NotFound): pods "abc" not found
I0814 09:09:37.325] has:pods "abc" not found
I0814 09:09:37.528] pod/test-pod created
I0814 09:09:37.666] Successful
I0814 09:09:37.667] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 09:09:37.667] has not:pods "test-pod" not found
I0814 09:09:37.670] Successful
I0814 09:09:37.670] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 09:09:37.670] has not:pod or type/name must be specified
I0814 09:09:37.774] pod "test-pod" deleted
I0814 09:09:37.799] +++ exit code: 0
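The exec checks above show that a pod which has not yet been scheduled to a node rejects exec with BadRequest ("does not have a host assigned") rather than NotFound. A minimal sketch; the command run inside the pod is illustrative:
  # Fails with BadRequest while the pod is still Pending and unscheduled.
  kubectl exec test-pod -- date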
I0814 09:09:37.835] Recording: run_kubectl_exec_resource_name_tests
I0814 09:09:37.836] Running command: run_kubectl_exec_resource_name_tests
I0814 09:09:37.862] 
... skipping 2 lines ...
I0814 09:09:37.871] +++ command: run_kubectl_exec_resource_name_tests
I0814 09:09:37.886] +++ [0814 09:09:37] Creating namespace namespace-1565773777-21539
I0814 09:09:37.969] namespace/namespace-1565773777-21539 created
I0814 09:09:38.043] Context "test" modified.
I0814 09:09:38.052] +++ [0814 09:09:38] Testing kubectl exec TYPE/NAME COMMAND
I0814 09:09:38.160] Successful
I0814 09:09:38.161] message:error: the server doesn't have a resource type "foo"
I0814 09:09:38.161] has:error:
I0814 09:09:38.245] Successful
I0814 09:09:38.246] message:Error from server (NotFound): deployments.apps "bar" not found
I0814 09:09:38.246] has:"bar" not found
I0814 09:09:38.395] pod/test-pod created
I0814 09:09:38.558] replicaset.apps/frontend created
W0814 09:09:38.659] I0814 09:09:38.562848   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773777-21539", Name:"frontend", UID:"41914c40-00a7-429a-a6f0-53bb6c87a8c0", APIVersion:"apps/v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-thhsl
W0814 09:09:38.660] I0814 09:09:38.566252   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773777-21539", Name:"frontend", UID:"41914c40-00a7-429a-a6f0-53bb6c87a8c0", APIVersion:"apps/v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5c9k9
W0814 09:09:38.660] I0814 09:09:38.566555   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773777-21539", Name:"frontend", UID:"41914c40-00a7-429a-a6f0-53bb6c87a8c0", APIVersion:"apps/v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7fcgq
I0814 09:09:38.761] configmap/test-set-env-config created
I0814 09:09:38.796] Successful
I0814 09:09:38.797] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0814 09:09:38.797] has:not implemented
I0814 09:09:38.879] Successful
I0814 09:09:38.880] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 09:09:38.880] has not:not found
I0814 09:09:38.881] Successful
I0814 09:09:38.881] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 09:09:38.881] has not:pod or type/name must be specified
I0814 09:09:38.978] Successful
I0814 09:09:38.978] message:Error from server (BadRequest): pod frontend-5c9k9 does not have a host assigned
I0814 09:09:38.979] has not:not found
I0814 09:09:38.980] Successful
I0814 09:09:38.981] message:Error from server (BadRequest): pod frontend-5c9k9 does not have a host assigned
I0814 09:09:38.981] has not:pod or type/name must be specified
I0814 09:09:39.058] pod "test-pod" deleted
I0814 09:09:39.135] replicaset.apps "frontend" deleted
I0814 09:09:39.223] configmap "test-set-env-config" deleted
I0814 09:09:39.244] +++ exit code: 0
I0814 09:09:39.278] Recording: run_create_secret_tests
I0814 09:09:39.279] Running command: run_create_secret_tests
I0814 09:09:39.297] 
I0814 09:09:39.300] +++ Running case: test-cmd.run_create_secret_tests 
I0814 09:09:39.302] +++ working dir: /go/src/k8s.io/kubernetes
I0814 09:09:39.304] +++ command: run_create_secret_tests
I0814 09:09:39.390] Successful
I0814 09:09:39.390] message:Error from server (NotFound): secrets "mysecret" not found
I0814 09:09:39.390] has:secrets "mysecret" not found
I0814 09:09:39.540] Successful
I0814 09:09:39.540] message:Error from server (NotFound): secrets "mysecret" not found
I0814 09:09:39.541] has:secrets "mysecret" not found
I0814 09:09:39.541] Successful
I0814 09:09:39.542] message:user-specified
I0814 09:09:39.542] has:user-specified
I0814 09:09:39.609] Successful
I0814 09:09:39.679] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"223cd225-dd51-4469-8e07-ed399a769a5d","resourceVersion":"766","creationTimestamp":"2019-08-14T09:09:39Z"}}
... skipping 2 lines ...
I0814 09:09:39.847] has:uid
I0814 09:09:39.919] Successful
I0814 09:09:39.920] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"223cd225-dd51-4469-8e07-ed399a769a5d","resourceVersion":"767","creationTimestamp":"2019-08-14T09:09:39Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-14T09:09:39Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0814 09:09:39.920] has:config1
I0814 09:09:39.988] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"223cd225-dd51-4469-8e07-ed399a769a5d"}}
I0814 09:09:40.082] Successful
I0814 09:09:40.083] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0814 09:09:40.083] has:configmaps "tester-update-cm" not found
I0814 09:09:40.096] +++ exit code: 0
I0814 09:09:40.130] Recording: run_kubectl_create_kustomization_directory_tests
I0814 09:09:40.131] Running command: run_kubectl_create_kustomization_directory_tests
I0814 09:09:40.152] 
I0814 09:09:40.154] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
I0814 09:09:42.733] valid-pod   0/1     Pending   0          0s
I0814 09:09:42.733] has:valid-pod
I0814 09:09:43.820] Successful
I0814 09:09:43.820] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 09:09:43.821] valid-pod   0/1     Pending   0          0s
I0814 09:09:43.821] STATUS      REASON          MESSAGE
I0814 09:09:43.821] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 09:09:43.822] has:Timeout exceeded while reading body
I0814 09:09:43.897] Successful
I0814 09:09:43.898] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 09:09:43.898] valid-pod   0/1     Pending   0          1s
I0814 09:09:43.899] has:valid-pod
I0814 09:09:43.966] Successful
I0814 09:09:43.967] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0814 09:09:43.967] has:Invalid timeout value
I0814 09:09:44.037] pod "valid-pod" deleted
I0814 09:09:44.056] +++ exit code: 0
I0814 09:09:44.086] Recording: run_crd_tests
I0814 09:09:44.087] Running command: run_crd_tests
I0814 09:09:44.106] 
... skipping 244 lines ...
I0814 09:09:48.581] foo.company.com/test patched
I0814 09:09:48.672] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0814 09:09:48.752] foo.company.com/test patched
I0814 09:09:48.841] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0814 09:09:48.918] foo.company.com/test patched
I0814 09:09:49.002] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0814 09:09:49.157] +++ [0814 09:09:49] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
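Strategic merge patch relies on schema metadata that custom resources do not carry, which is why the local patch above is redirected to a JSON merge patch. A minimal sketch of the suggested form, reusing the foos/test object; the patched value is illustrative:
  # CustomResources have no strategic-merge schema, so fall back to a merge patch.
  kubectl patch foos/test --type merge -p '{"patched":"value3"}'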
I0814 09:09:49.216] {
I0814 09:09:49.217]     "apiVersion": "company.com/v1",
I0814 09:09:49.217]     "kind": "Foo",
I0814 09:09:49.217]     "metadata": {
I0814 09:09:49.217]         "annotations": {
I0814 09:09:49.217]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 353 lines ...
I0814 09:10:15.230] namespace/non-native-resources created
I0814 09:10:15.394] bar.company.com/test created
I0814 09:10:15.503] crd.sh:455: Successful get bars {{len .items}}: 1
I0814 09:10:15.581] namespace "non-native-resources" deleted
I0814 09:10:20.785] crd.sh:458: Successful get bars {{len .items}}: 0
I0814 09:10:20.955] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0814 09:10:21.056] Error from server (NotFound): namespaces "non-native-resources" not found
I0814 09:10:21.157] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0814 09:10:21.164] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 09:10:21.276] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0814 09:10:21.315] +++ exit code: 0
I0814 09:10:21.356] Recording: run_cmd_with_img_tests
I0814 09:10:21.356] Running command: run_cmd_with_img_tests
... skipping 7 lines ...
I0814 09:10:21.571] +++ [0814 09:10:21] Testing cmd with image
I0814 09:10:21.665] Successful
I0814 09:10:21.666] message:deployment.apps/test1 created
I0814 09:10:21.666] has:deployment.apps/test1 created
I0814 09:10:21.748] deployment.apps "test1" deleted
I0814 09:10:21.828] Successful
I0814 09:10:21.828] message:error: Invalid image name "InvalidImageName": invalid reference format
I0814 09:10:21.828] has:error: Invalid image name "InvalidImageName": invalid reference format
I0814 09:10:21.843] +++ exit code: 0
I0814 09:10:21.887] +++ [0814 09:10:21] Testing recursive resources
I0814 09:10:21.894] +++ [0814 09:10:21] Creating namespace namespace-1565773821-14479
I0814 09:10:21.972] namespace/namespace-1565773821-14479 created
I0814 09:10:22.045] Context "test" modified.
I0814 09:10:22.140] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:10:22.457] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:22.459] Successful
I0814 09:10:22.459] message:pod/busybox0 created
I0814 09:10:22.459] pod/busybox1 created
I0814 09:10:22.460] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 09:10:22.460] has:error validating data: kind not set
I0814 09:10:22.553] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:22.727] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0814 09:10:22.729] Successful
I0814 09:10:22.730] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 09:10:22.730] has:Object 'Kind' is missing
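The recursive checks above walk a directory tree in which one manifest is deliberately broken ("ind" instead of "kind"): the valid pods are still processed, and the broken file is reported on its own. A minimal sketch of the pattern, using the same test-data directory:
  # --recursive (-R) processes every manifest under the directory, continuing past per-file errors.
  kubectl create --recursive -f hack/testdata/recursive/pod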
I0814 09:10:22.822] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:23.095] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 09:10:23.098] Successful
I0814 09:10:23.099] message:pod/busybox0 replaced
I0814 09:10:23.099] pod/busybox1 replaced
I0814 09:10:23.099] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 09:10:23.099] has:error validating data: kind not set
I0814 09:10:23.190] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:23.288] Successful
I0814 09:10:23.288] message:Name:         busybox0
I0814 09:10:23.288] Namespace:    namespace-1565773821-14479
I0814 09:10:23.288] Priority:     0
I0814 09:10:23.288] Node:         <none>
... skipping 159 lines ...
I0814 09:10:23.301] has:Object 'Kind' is missing
I0814 09:10:23.392] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:23.588] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0814 09:10:23.591] Successful
I0814 09:10:23.591] message:pod/busybox0 annotated
I0814 09:10:23.591] pod/busybox1 annotated
I0814 09:10:23.592] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 09:10:23.592] has:Object 'Kind' is missing
I0814 09:10:23.687] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:23.963] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 09:10:23.966] Successful
I0814 09:10:23.966] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 09:10:23.966] pod/busybox0 configured
I0814 09:10:23.966] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 09:10:23.966] pod/busybox1 configured
I0814 09:10:23.967] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 09:10:23.967] has:error validating data: kind not set
I0814 09:10:24.058] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:10:24.206] deployment.apps/nginx created
W0814 09:10:24.306] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 09:10:24.307] I0814 09:10:21.654288   53035 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565773821-27581", Name:"test1", UID:"62b42bb0-ebc8-4615-9a58-1a7a6f1ebece", APIVersion:"apps/v1", ResourceVersion:"922", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-9797f89d8 to 1
W0814 09:10:24.308] I0814 09:10:21.679731   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773821-27581", Name:"test1-9797f89d8", UID:"1158cf9f-ce76-444f-81ef-244ca886e2a4", APIVersion:"apps/v1", ResourceVersion:"923", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-9gzbj
W0814 09:10:24.308] W0814 09:10:21.966575   49603 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 09:10:24.308] E0814 09:10:21.968394   53035 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:24.308] W0814 09:10:22.067488   49603 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 09:10:24.308] E0814 09:10:22.068875   53035 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:24.308] W0814 09:10:22.176537   49603 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 09:10:24.309] E0814 09:10:22.178239   53035 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:24.309] W0814 09:10:22.288765   49603 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 09:10:24.309] E0814 09:10:22.290259   53035 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:24.309] E0814 09:10:22.969960   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:24.309] E0814 09:10:23.070177   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:24.310] E0814 09:10:23.179471   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:24.310] E0814 09:10:23.291607   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:24.310] E0814 09:10:23.971390   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:24.310] E0814 09:10:24.072037   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:24.310] E0814 09:10:24.180525   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:24.311] I0814 09:10:24.211197   53035 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565773821-14479", Name:"nginx", UID:"c87ce6b0-da67-418e-8572-eeae66c4ecd4", APIVersion:"apps/v1", ResourceVersion:"948", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0814 09:10:24.311] I0814 09:10:24.216250   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773821-14479", Name:"nginx-bbbbb95b5", UID:"9291db33-5ba7-445a-a1e0-5f8354785ffd", APIVersion:"apps/v1", ResourceVersion:"949", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-782cp
W0814 09:10:24.311] I0814 09:10:24.220108   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773821-14479", Name:"nginx-bbbbb95b5", UID:"9291db33-5ba7-445a-a1e0-5f8354785ffd", APIVersion:"apps/v1", ResourceVersion:"949", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-qwlnq
W0814 09:10:24.312] I0814 09:10:24.221113   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773821-14479", Name:"nginx-bbbbb95b5", UID:"9291db33-5ba7-445a-a1e0-5f8354785ffd", APIVersion:"apps/v1", ResourceVersion:"949", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-tpwsn
W0814 09:10:24.312] E0814 09:10:24.293544   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:24.412] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 09:10:24.419] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0814 09:10:24.593] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I0814 09:10:24.595] Successful
I0814 09:10:24.596] message:apiVersion: extensions/v1beta1
I0814 09:10:24.596] kind: Deployment
... skipping 40 lines ...
I0814 09:10:24.676] deployment.apps "nginx" deleted
I0814 09:10:24.780] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:24.961] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:24.964] Successful
I0814 09:10:24.964] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0814 09:10:24.964] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 09:10:24.965] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 09:10:24.965] has:Object 'Kind' is missing
I0814 09:10:25.066] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:25.159] Successful
I0814 09:10:25.160] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 09:10:25.160] has:busybox0:busybox1:
I0814 09:10:25.161] Successful
I0814 09:10:25.162] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 09:10:25.162] has:Object 'Kind' is missing
I0814 09:10:25.264] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:25.363] pod/busybox0 labeled
I0814 09:10:25.363] pod/busybox1 labeled
I0814 09:10:25.364] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 09:10:25.462] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0814 09:10:25.465] Successful
I0814 09:10:25.465] message:pod/busybox0 labeled
I0814 09:10:25.465] pod/busybox1 labeled
I0814 09:10:25.465] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 09:10:25.465] has:Object 'Kind' is missing
I0814 09:10:25.570] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:25.663] pod/busybox0 patched
I0814 09:10:25.664] pod/busybox1 patched
I0814 09:10:25.664] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 09:10:25.758] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0814 09:10:25.760] Successful
I0814 09:10:25.760] message:pod/busybox0 patched
I0814 09:10:25.761] pod/busybox1 patched
I0814 09:10:25.761] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 09:10:25.761] has:Object 'Kind' is missing
I0814 09:10:25.856] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:26.045] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:10:26.047] Successful
I0814 09:10:26.047] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 09:10:26.047] pod "busybox0" force deleted
I0814 09:10:26.048] pod "busybox1" force deleted
I0814 09:10:26.048] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 09:10:26.048] has:Object 'Kind' is missing
I0814 09:10:26.136] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:10:26.290] replicationcontroller/busybox0 created
I0814 09:10:26.309] replicationcontroller/busybox1 created
W0814 09:10:26.409] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 09:10:26.410] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0814 09:10:26.410] E0814 09:10:24.972918   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:26.410] E0814 09:10:25.074000   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:26.410] E0814 09:10:25.181969   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:26.411] E0814 09:10:25.295836   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:26.411] E0814 09:10:25.974844   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:26.411] E0814 09:10:26.075380   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:26.411] E0814 09:10:26.183395   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:26.411] E0814 09:10:26.307913   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:26.412] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 09:10:26.412] I0814 09:10:26.309564   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565773821-14479", Name:"busybox0", UID:"b2db376d-4588-4c93-8f62-a9d5816fa68c", APIVersion:"v1", ResourceVersion:"979", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-cmqjj
W0814 09:10:26.412] I0814 09:10:26.321260   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565773821-14479", Name:"busybox1", UID:"90ff9cfc-e106-4d29-84c1-df5980a766aa", APIVersion:"v1", ResourceVersion:"980", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-bxjjx
I0814 09:10:26.513] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:26.527] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:26.620] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 09:10:26.712] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 09:10:26.891] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 09:10:26.984] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 09:10:26.987] Successful
I0814 09:10:26.987] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0814 09:10:26.987] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0814 09:10:26.988] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 09:10:26.988] has:Object 'Kind' is missing
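The decode errors above come from the intentionally broken test manifest, in which the kind field is misspelled as "ind", so the client cannot determine the object type. Two remediation paths are implied by the messages, sketched here assuming the directory layout matches the logged paths; note that --validate=false only skips client-side schema validation and does not rescue an object whose Kind cannot be decoded:

  # Path 1: fix the manifest so "kind: ReplicationController" is present, then re-apply the directory recursively.
  kubectl apply -R -f hack/testdata/recursive/rc/
  # Path 2: suppress validation, as the earlier error message suggests.
  kubectl create -R -f hack/testdata/recursive/rc/ --validate=false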
I0814 09:10:27.067] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0814 09:10:27.151] horizontalpodautoscaler.autoscaling "busybox1" deleted
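The HPA assertions above check minReplicas 1, maxReplicas 2, and a CPU target of 80%. A minimal sketch of a command that yields an autoscaler with those values (controller name taken from the log, flags assumed to match the defaults exercised by the test):

  kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80
  kubectl get hpa busybox0 -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}'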
I0814 09:10:27.253] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:27.355] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 09:10:27.447] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 09:10:27.653] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 09:10:27.746] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 09:10:27.749] Successful
I0814 09:10:27.749] message:service/busybox0 exposed
I0814 09:10:27.749] service/busybox1 exposed
I0814 09:10:27.750] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 09:10:27.750] has:Object 'Kind' is missing
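The service assertions above show an unnamed port entry listening on 80 for each controller. A sketch of the corresponding expose call, assuming defaults for everything else:

  kubectl expose rc busybox0 --port=80
  kubectl get service busybox0 -o go-template='{{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}'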
I0814 09:10:27.854] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:27.947] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 09:10:28.038] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 09:10:28.247] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0814 09:10:28.339] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0814 09:10:28.341] Successful
I0814 09:10:28.342] message:replicationcontroller/busybox0 scaled
I0814 09:10:28.342] replicationcontroller/busybox1 scaled
I0814 09:10:28.342] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 09:10:28.342] has:Object 'Kind' is missing
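Between the two groups of assertions above, the replica count moves from 1 to 2 on both controllers, consistent with a scale operation. A minimal sketch using the logged names:

  kubectl scale --replicas=2 rc/busybox0 rc/busybox1
  kubectl get rc busybox0 -o go-template='{{.spec.replicas}}'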
I0814 09:10:28.440] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:28.620] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:10:28.624] Successful
I0814 09:10:28.625] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 09:10:28.625] replicationcontroller "busybox0" force deleted
I0814 09:10:28.626] replicationcontroller "busybox1" force deleted
I0814 09:10:28.626] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 09:10:28.627] has:Object 'Kind' is missing
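The warning quoted above is the standard caveat for forced deletion: the API objects are removed immediately, without waiting for the workload to terminate. A sketch of the command shape that triggers it:

  kubectl delete rc busybox0 busybox1 --force --grace-period=0   # skips graceful termination entirely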
I0814 09:10:28.713] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:10:28.866] deployment.apps/nginx1-deployment created
I0814 09:10:28.871] deployment.apps/nginx0-deployment created
W0814 09:10:28.975] E0814 09:10:26.976130   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:28.976] E0814 09:10:27.077032   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:28.976] E0814 09:10:27.185078   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:28.976] E0814 09:10:27.309336   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:28.976] E0814 09:10:27.978045   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:28.977] E0814 09:10:28.078862   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:28.977] I0814 09:10:28.136565   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565773821-14479", Name:"busybox0", UID:"b2db376d-4588-4c93-8f62-a9d5816fa68c", APIVersion:"v1", ResourceVersion:"999", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-cgx79
W0814 09:10:28.977] I0814 09:10:28.145978   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565773821-14479", Name:"busybox1", UID:"90ff9cfc-e106-4d29-84c1-df5980a766aa", APIVersion:"v1", ResourceVersion:"1003", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-57mzk
W0814 09:10:28.977] E0814 09:10:28.186202   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:28.978] E0814 09:10:28.310945   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:28.978] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 09:10:28.978] I0814 09:10:28.871025   53035 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565773821-14479", Name:"nginx1-deployment", UID:"b14900c9-8a22-4c51-99b9-81998358ae1d", APIVersion:"apps/v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0814 09:10:28.979] I0814 09:10:28.875054   53035 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565773821-14479", Name:"nginx0-deployment", UID:"f73ea11d-e666-423f-9b08-d1f607f4dde5", APIVersion:"apps/v1", ResourceVersion:"1021", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0814 09:10:28.979] I0814 09:10:28.875533   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773821-14479", Name:"nginx1-deployment-84f7f49fb7", UID:"e11ddeab-cf1e-4d0b-b911-74d72f2c9c15", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-r2rp5
W0814 09:10:28.980] I0814 09:10:28.880981   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773821-14479", Name:"nginx0-deployment-57475bf54d", UID:"f4970c42-dd92-4dfe-943e-6ce5856e225f", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-rkmrn
W0814 09:10:28.980] I0814 09:10:28.881878   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773821-14479", Name:"nginx1-deployment-84f7f49fb7", UID:"e11ddeab-cf1e-4d0b-b911-74d72f2c9c15", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-cxkmq
W0814 09:10:28.980] I0814 09:10:28.884844   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565773821-14479", Name:"nginx0-deployment-57475bf54d", UID:"f4970c42-dd92-4dfe-943e-6ce5856e225f", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-jzw82
W0814 09:10:28.981] E0814 09:10:28.979686   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:29.081] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0814 09:10:29.082] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 09:10:29.281] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 09:10:29.284] Successful
I0814 09:10:29.284] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0814 09:10:29.284] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0814 09:10:29.284] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 09:10:29.285] has:Object 'Kind' is missing
I0814 09:10:29.382] deployment.apps/nginx1-deployment paused
I0814 09:10:29.389] deployment.apps/nginx0-deployment paused
W0814 09:10:29.490] E0814 09:10:29.082648   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:29.491] E0814 09:10:29.187937   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:29.491] E0814 09:10:29.312460   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:29.591] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0814 09:10:29.592] Successful
I0814 09:10:29.592] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 09:10:29.592] has:Object 'Kind' is missing
I0814 09:10:29.592] deployment.apps/nginx1-deployment resumed
I0814 09:10:29.597] deployment.apps/nginx0-deployment resumed
... skipping 7 lines ...
I0814 09:10:29.815] 1         <none>
I0814 09:10:29.815] 
I0814 09:10:29.815] deployment.apps/nginx0-deployment 
I0814 09:10:29.815] REVISION  CHANGE-CAUSE
I0814 09:10:29.815] 1         <none>
I0814 09:10:29.815] 
I0814 09:10:29.816] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 09:10:29.816] has:nginx0-deployment
I0814 09:10:29.817] Successful
I0814 09:10:29.817] message:deployment.apps/nginx1-deployment 
I0814 09:10:29.817] REVISION  CHANGE-CAUSE
I0814 09:10:29.817] 1         <none>
I0814 09:10:29.817] 
I0814 09:10:29.817] deployment.apps/nginx0-deployment 
I0814 09:10:29.818] REVISION  CHANGE-CAUSE
I0814 09:10:29.818] 1         <none>
I0814 09:10:29.818] 
I0814 09:10:29.818] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 09:10:29.819] has:nginx1-deployment
I0814 09:10:29.819] Successful
I0814 09:10:29.820] message:deployment.apps/nginx1-deployment 
I0814 09:10:29.820] REVISION  CHANGE-CAUSE
I0814 09:10:29.820] 1         <none>
I0814 09:10:29.820] 
I0814 09:10:29.820] deployment.apps/nginx0-deployment 
I0814 09:10:29.820] REVISION  CHANGE-CAUSE
I0814 09:10:29.820] 1         <none>
I0814 09:10:29.820] 
I0814 09:10:29.821] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 09:10:29.821] has:Object 'Kind' is missing
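The blocks above exercise the rollout subcommands against the paired deployments: an undo that is skipped because the template already matches revision 1, pause, resume, and a history table per deployment. A minimal per-deployment sketch (names from the log):

  kubectl rollout undo deployment/nginx1-deployment     # skipped when the template already matches the target revision
  kubectl rollout pause deployment/nginx1-deployment
  kubectl rollout resume deployment/nginx1-deployment
  kubectl rollout history deployment/nginx1-deployment  # prints the REVISION / CHANGE-CAUSE table shown above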
I0814 09:10:29.901] deployment.apps "nginx1-deployment" force deleted
I0814 09:10:29.906] deployment.apps "nginx0-deployment" force deleted
W0814 09:10:30.006] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 09:10:30.007] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0814 09:10:30.007] E0814 09:10:29.981457   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:30.085] E0814 09:10:30.084346   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:30.191] E0814 09:10:30.190575   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:30.315] E0814 09:10:30.314415   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:30.983] E0814 09:10:30.982799   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:31.084] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:10:31.153] replicationcontroller/busybox0 created
I0814 09:10:31.158] replicationcontroller/busybox1 created
W0814 09:10:31.258] E0814 09:10:31.085454   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:31.259] I0814 09:10:31.156915   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565773821-14479", Name:"busybox0", UID:"1ec66db8-53b0-4fc0-9d60-9af3f50838bd", APIVersion:"v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-vx4nk
W0814 09:10:31.259] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 09:10:31.260] I0814 09:10:31.162461   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565773821-14479", Name:"busybox1", UID:"7c247e35-cf34-4256-ba52-6e553bf67cb0", APIVersion:"v1", ResourceVersion:"1072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-858q5
W0814 09:10:31.260] E0814 09:10:31.192215   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:31.316] E0814 09:10:31.316122   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:31.417] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 09:10:31.418] Successful
I0814 09:10:31.418] message:no rollbacker has been implemented for "ReplicationController"
I0814 09:10:31.418] no rollbacker has been implemented for "ReplicationController"
I0814 09:10:31.418] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 09:10:31.419] has:no rollbacker has been implemented for "ReplicationController"
I0814 09:10:31.419] Successful
I0814 09:10:31.419] message:no rollbacker has been implemented for "ReplicationController"
I0814 09:10:31.419] no rollbacker has been implemented for "ReplicationController"
I0814 09:10:31.419] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 09:10:31.419] has:Object 'Kind' is missing
I0814 09:10:31.467] Successful
I0814 09:10:31.468] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 09:10:31.468] error: replicationcontrollers "busybox0" pausing is not supported
I0814 09:10:31.468] error: replicationcontrollers "busybox1" pausing is not supported
I0814 09:10:31.468] has:Object 'Kind' is missing
I0814 09:10:31.470] Successful
I0814 09:10:31.472] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 09:10:31.472] error: replicationcontrollers "busybox0" pausing is not supported
I0814 09:10:31.472] error: replicationcontrollers "busybox1" pausing is not supported
I0814 09:10:31.472] has:replicationcontrollers "busybox0" pausing is not supported
I0814 09:10:31.473] Successful
I0814 09:10:31.474] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 09:10:31.474] error: replicationcontrollers "busybox0" pausing is not supported
I0814 09:10:31.474] error: replicationcontrollers "busybox1" pausing is not supported
I0814 09:10:31.474] has:replicationcontrollers "busybox1" pausing is not supported
I0814 09:10:31.575] Successful
I0814 09:10:31.576] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 09:10:31.576] error: replicationcontrollers "busybox0" resuming is not supported
I0814 09:10:31.576] error: replicationcontrollers "busybox1" resuming is not supported
I0814 09:10:31.576] has:Object 'Kind' is missing
I0814 09:10:31.577] Successful
I0814 09:10:31.578] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 09:10:31.578] error: replicationcontrollers "busybox0" resuming is not supported
I0814 09:10:31.578] error: replicationcontrollers "busybox1" resuming is not supported
I0814 09:10:31.578] has:replicationcontrollers "busybox0" resuming is not supported
I0814 09:10:31.580] Successful
I0814 09:10:31.580] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 09:10:31.580] error: replicationcontrollers "busybox0" resuming is not supported
I0814 09:10:31.581] error: replicationcontrollers "busybox1" resuming is not supported
I0814 09:10:31.581] has:replicationcontrollers "busybox0" resuming is not supported
I0814 09:10:31.652] replicationcontroller "busybox0" force deleted
I0814 09:10:31.657] replicationcontroller "busybox1" force deleted
W0814 09:10:31.758] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 09:10:31.759] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0814 09:10:31.985] E0814 09:10:31.984360   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:32.088] E0814 09:10:32.087314   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:32.195] E0814 09:10:32.193976   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:32.318] E0814 09:10:32.318091   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:32.669] Recording: run_namespace_tests
I0814 09:10:32.669] Running command: run_namespace_tests
I0814 09:10:32.692] 
I0814 09:10:32.695] +++ Running case: test-cmd.run_namespace_tests 
I0814 09:10:32.697] +++ working dir: /go/src/k8s.io/kubernetes
I0814 09:10:32.699] +++ command: run_namespace_tests
I0814 09:10:32.710] +++ [0814 09:10:32] Testing kubectl(v1:namespaces)
I0814 09:10:32.781] namespace/my-namespace created
I0814 09:10:32.882] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 09:10:32.956] namespace "my-namespace" deleted
W0814 09:10:33.057] E0814 09:10:32.986174   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:33.089] E0814 09:10:33.089167   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:33.196] E0814 09:10:33.195731   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:33.320] E0814 09:10:33.319896   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:33.988] E0814 09:10:33.987748   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:34.091] E0814 09:10:34.090816   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:34.198] E0814 09:10:34.197507   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:34.322] E0814 09:10:34.321781   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:34.991] E0814 09:10:34.990303   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:35.093] E0814 09:10:35.092754   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:35.200] E0814 09:10:35.199351   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:35.324] E0814 09:10:35.323508   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:35.992] E0814 09:10:35.991926   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:36.095] E0814 09:10:36.094374   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:36.201] E0814 09:10:36.200970   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:36.325] E0814 09:10:36.324950   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:36.994] E0814 09:10:36.993474   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:37.096] E0814 09:10:37.096034   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:37.203] E0814 09:10:37.202509   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:37.327] E0814 09:10:37.326473   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:37.995] E0814 09:10:37.994391   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:38.095] namespace/my-namespace condition met
I0814 09:10:38.134] Successful
I0814 09:10:38.134] message:Error from server (NotFound): namespaces "my-namespace" not found
I0814 09:10:38.134] has: not found
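The "condition met" line followed by a NotFound on lookup matches the pattern of waiting for a deletion to finish and then confirming the namespace is gone. A hedged sketch of that sequence (the exact invocation used by the test is not visible in the log):

  kubectl delete namespace my-namespace
  kubectl wait --for=delete namespace/my-namespace --timeout=60s   # returns once the object is actually gone
  kubectl get namespace my-namespace                               # Error from server (NotFound)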
I0814 09:10:38.205] namespace/my-namespace created
I0814 09:10:38.300] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 09:10:38.506] (BSuccessful
I0814 09:10:38.507] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 09:10:38.507] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0814 09:10:38.510] namespace "namespace-1565773781-23877" deleted
I0814 09:10:38.510] namespace "namespace-1565773782-26173" deleted
I0814 09:10:38.510] namespace "namespace-1565773784-6245" deleted
I0814 09:10:38.510] namespace "namespace-1565773785-19742" deleted
I0814 09:10:38.510] namespace "namespace-1565773821-14479" deleted
I0814 09:10:38.510] namespace "namespace-1565773821-27581" deleted
I0814 09:10:38.510] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 09:10:38.511] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 09:10:38.511] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 09:10:38.511] has:warning: deleting cluster-scoped resources
I0814 09:10:38.511] Successful
I0814 09:10:38.511] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 09:10:38.511] namespace "kube-node-lease" deleted
I0814 09:10:38.511] namespace "my-namespace" deleted
I0814 09:10:38.511] namespace "namespace-1565773688-8549" deleted
... skipping 27 lines ...
I0814 09:10:38.514] namespace "namespace-1565773781-23877" deleted
I0814 09:10:38.515] namespace "namespace-1565773782-26173" deleted
I0814 09:10:38.515] namespace "namespace-1565773784-6245" deleted
I0814 09:10:38.515] namespace "namespace-1565773785-19742" deleted
I0814 09:10:38.515] namespace "namespace-1565773821-14479" deleted
I0814 09:10:38.515] namespace "namespace-1565773821-27581" deleted
I0814 09:10:38.515] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 09:10:38.516] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 09:10:38.516] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 09:10:38.516] has:namespace "my-namespace" deleted
W0814 09:10:38.617] E0814 09:10:38.097535   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:38.617] E0814 09:10:38.203858   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:38.617] E0814 09:10:38.327935   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:38.718] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0814 09:10:38.718] namespace/other created
I0814 09:10:38.801] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0814 09:10:38.894] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:10:39.062] pod/valid-pod created
I0814 09:10:39.164] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 09:10:39.265] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 09:10:39.348] Successful
I0814 09:10:39.349] message:error: a resource cannot be retrieved by name across all namespaces
I0814 09:10:39.349] has:a resource cannot be retrieved by name across all namespaces
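The assertion above confirms that a pod cannot be fetched by name with --all-namespaces; a named lookup has to target a specific namespace. A sketch with the logged pod and namespace:

  kubectl get pod valid-pod --all-namespaces   # error: a resource cannot be retrieved by name across all namespaces
  kubectl get pod valid-pod --namespace=other  # succeeds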
I0814 09:10:39.437] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 09:10:39.519] pod "valid-pod" force deleted
W0814 09:10:39.620] E0814 09:10:38.995677   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:39.621] I0814 09:10:39.005577   53035 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0814 09:10:39.621] I0814 09:10:39.085245   53035 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0814 09:10:39.622] E0814 09:10:39.099237   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:39.622] I0814 09:10:39.106475   53035 controller_utils.go:1036] Caches are synced for garbage collector controller
W0814 09:10:39.622] I0814 09:10:39.185940   53035 controller_utils.go:1036] Caches are synced for resource quota controller
W0814 09:10:39.622] E0814 09:10:39.205286   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:39.622] E0814 09:10:39.329387   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:39.623] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 09:10:39.723] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:10:39.723] namespace "other" deleted
W0814 09:10:39.998] E0814 09:10:39.997692   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:40.102] E0814 09:10:40.101310   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:40.207] E0814 09:10:40.206983   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:40.331] E0814 09:10:40.331122   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:40.999] E0814 09:10:40.999059   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:41.104] E0814 09:10:41.103648   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:41.209] E0814 09:10:41.208724   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:41.333] E0814 09:10:41.332997   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:41.788] I0814 09:10:41.787848   53035 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1565773821-14479
W0814 09:10:41.792] I0814 09:10:41.791685   53035 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1565773821-14479
W0814 09:10:42.001] E0814 09:10:42.000758   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:42.105] E0814 09:10:42.105261   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:42.210] E0814 09:10:42.209954   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:42.335] E0814 09:10:42.334672   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:43.003] E0814 09:10:43.002368   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:43.107] E0814 09:10:43.106553   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:43.212] E0814 09:10:43.211642   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:43.337] E0814 09:10:43.336279   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:44.004] E0814 09:10:44.003312   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:44.108] E0814 09:10:44.108214   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:44.213] E0814 09:10:44.212653   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:44.338] E0814 09:10:44.338049   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:44.835] +++ exit code: 0
I0814 09:10:44.875] Recording: run_secrets_test
I0814 09:10:44.875] Running command: run_secrets_test
I0814 09:10:44.899] 
I0814 09:10:44.901] +++ Running case: test-cmd.run_secrets_test 
I0814 09:10:44.904] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 57 lines ...
I0814 09:10:46.800] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0814 09:10:46.872] secret "test-secret" deleted
I0814 09:10:46.950] secret/test-secret created
I0814 09:10:47.039] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 09:10:47.127] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0814 09:10:47.200] secret "test-secret" deleted
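The checks above verify a secret of type kubernetes.io/tls in the test-secrets namespace. A minimal sketch of creating one, with the certificate and key paths assumed purely for illustration:

  kubectl create secret tls test-secret --cert=server.crt --key=server.key --namespace=test-secrets
  kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'   # kubernetes.io/tls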
W0814 09:10:47.301] E0814 09:10:45.004321   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:47.302] E0814 09:10:45.109749   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:47.302] I0814 09:10:45.161438   69978 loader.go:375] Config loaded from file:  /tmp/tmp.ixtr2FkaWe/.kube/config
W0814 09:10:47.302] E0814 09:10:45.214215   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:47.302] E0814 09:10:45.340374   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:47.302] E0814 09:10:46.005888   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:47.302] E0814 09:10:46.110991   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:47.303] E0814 09:10:46.215381   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:47.303] E0814 09:10:46.341666   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:47.303] E0814 09:10:47.007287   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:47.303] E0814 09:10:47.112197   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:47.304] E0814 09:10:47.216344   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:47.343] E0814 09:10:47.342936   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:47.444] secret/secret-string-data created
I0814 09:10:47.447] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 09:10:47.532] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 09:10:47.617] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0814 09:10:47.687] secret "secret-string-data" deleted
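The assertions above show that the secret's values end up base64-encoded under .data (djE= and djI= decode to v1 and v2) while .stringData is not persisted. One way to create a secret with the same keys, assumed here purely for illustration since the manifest actually used by the test is not shown:

  kubectl create secret generic secret-string-data --namespace=test-secrets --from-literal=k1=v1 --from-literal=k2=v2
  kubectl get secret/secret-string-data --namespace=test-secrets -o go-template='{{.data}}'   # map[k1:djE= k2:djI=]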
I0814 09:10:47.779] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:10:47.941] secret "test-secret" deleted
I0814 09:10:48.029] namespace "test-secrets" deleted
W0814 09:10:48.130] E0814 09:10:48.008535   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:48.131] E0814 09:10:48.113677   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:48.218] E0814 09:10:48.217480   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:48.345] E0814 09:10:48.344287   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:49.010] E0814 09:10:49.010098   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:49.115] E0814 09:10:49.115243   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:49.220] E0814 09:10:49.219361   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:49.346] E0814 09:10:49.345795   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:50.014] E0814 09:10:50.013664   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:50.117] E0814 09:10:50.116528   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:50.221] E0814 09:10:50.220659   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:50.348] E0814 09:10:50.347504   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:51.015] E0814 09:10:51.015167   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:51.118] E0814 09:10:51.117775   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:51.222] E0814 09:10:51.222044   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:51.349] E0814 09:10:51.349008   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:52.017] E0814 09:10:52.016461   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:52.119] E0814 09:10:52.118997   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:52.224] E0814 09:10:52.223705   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:52.351] E0814 09:10:52.350623   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:53.018] E0814 09:10:53.018008   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:53.121] E0814 09:10:53.120427   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:53.221] +++ exit code: 0
I0814 09:10:53.222] Recording: run_configmap_tests
I0814 09:10:53.222] Running command: run_configmap_tests
I0814 09:10:53.222] 
I0814 09:10:53.222] +++ Running case: test-cmd.run_configmap_tests 
I0814 09:10:53.222] +++ working dir: /go/src/k8s.io/kubernetes
I0814 09:10:53.222] +++ command: run_configmap_tests
I0814 09:10:53.223] +++ [0814 09:10:53] Creating namespace namespace-1565773853-16884
I0814 09:10:53.263] namespace/namespace-1565773853-16884 created
I0814 09:10:53.330] Context "test" modified.
I0814 09:10:53.337] +++ [0814 09:10:53] Testing configmaps
W0814 09:10:53.438] E0814 09:10:53.224880   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:53.439] E0814 09:10:53.351742   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:53.539] configmap/test-configmap created
I0814 09:10:53.608] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0814 09:10:53.676] configmap "test-configmap" deleted
I0814 09:10:53.767] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0814 09:10:53.833] namespace/test-configmaps created
I0814 09:10:53.920] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
... skipping 3 lines ...
I0814 09:10:54.232] configmap/test-binary-configmap created
I0814 09:10:54.327] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0814 09:10:54.408] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0814 09:10:54.630] configmap "test-configmap" deleted
I0814 09:10:54.704] configmap "test-binary-configmap" deleted
I0814 09:10:54.779] namespace "test-configmaps" deleted
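The configmap checks above create and verify a plain and a binary configmap inside the test-configmaps namespace before tearing everything down. A minimal sketch of equivalent commands, with the literal key and file name assumed for illustration:

  kubectl create configmap test-configmap --namespace=test-configmaps --from-literal=key1=value1
  kubectl create configmap test-binary-configmap --namespace=test-configmaps --from-file=data.bin
  kubectl get configmap/test-configmap --namespace=test-configmaps -o go-template='{{.metadata.name}}'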
W0814 09:10:54.880] E0814 09:10:54.019314   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:54.880] E0814 09:10:54.121735   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:54.881] E0814 09:10:54.226163   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:54.881] E0814 09:10:54.353092   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:55.021] E0814 09:10:55.020649   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:55.123] E0814 09:10:55.123013   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:55.228] E0814 09:10:55.227877   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:55.355] E0814 09:10:55.354386   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:56.022] E0814 09:10:56.021955   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:56.124] E0814 09:10:56.124248   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:56.229] E0814 09:10:56.229279   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:56.356] E0814 09:10:56.355818   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:57.024] E0814 09:10:57.023621   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:57.126] E0814 09:10:57.125850   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:57.231] E0814 09:10:57.231038   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:57.358] E0814 09:10:57.357253   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:58.025] E0814 09:10:58.024959   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:58.128] E0814 09:10:58.127296   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:58.233] E0814 09:10:58.232416   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:58.359] E0814 09:10:58.359077   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:59.027] E0814 09:10:59.026545   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:59.129] E0814 09:10:59.128796   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:59.234] E0814 09:10:59.233883   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:10:59.361] E0814 09:10:59.360723   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:10:59.883] +++ exit code: 0
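For reference, a minimal shell sketch (not part of the captured log) of the kubectl calls the core.sh configmap checks above exercise; the --from-literal data and the client setup are illustrative assumptions, not taken from the test script:
# Assumes kubectl is already pointed at the test-cmd API server (e.g. --server=http://127.0.0.1:8080 --match-server-version).
kubectl create configmap test-configmap --from-literal=key=value          # illustrative payload; the test's data may differ
kubectl get configmap/test-configmap -o go-template='{{.metadata.name}}'  # core.sh:28 expects: test-configmap
kubectl delete configmap test-configmap
kubectl create namespace test-configmaps
kubectl create configmap test-configmap --namespace=test-configmaps --from-literal=key=value
kubectl get configmap/test-configmap --namespace=test-configmaps -o go-template='{{.metadata.name}}'   # core.sh:48
kubectl delete namespace test-configmaps   # tears down the namespace and the configmaps inside it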
I0814 09:10:59.916] Recording: run_client_config_tests
I0814 09:10:59.916] Running command: run_client_config_tests
I0814 09:10:59.935] 
I0814 09:10:59.937] +++ Running case: test-cmd.run_client_config_tests 
I0814 09:10:59.940] +++ working dir: /go/src/k8s.io/kubernetes
I0814 09:10:59.943] +++ command: run_client_config_tests
I0814 09:10:59.957] +++ [0814 09:10:59] Creating namespace namespace-1565773859-1023
I0814 09:11:00.028] namespace/namespace-1565773859-1023 created
I0814 09:11:00.099] Context "test" modified.
I0814 09:11:00.106] +++ [0814 09:11:00] Testing client config
I0814 09:11:00.174] Successful
I0814 09:11:00.174] message:error: stat missing: no such file or directory
I0814 09:11:00.175] has:missing: no such file or directory
I0814 09:11:00.241] Successful
I0814 09:11:00.241] message:error: stat missing: no such file or directory
I0814 09:11:00.241] has:missing: no such file or directory
I0814 09:11:00.318] Successful
I0814 09:11:00.318] message:error: stat missing: no such file or directory
I0814 09:11:00.318] has:missing: no such file or directory
I0814 09:11:00.386] Successful
I0814 09:11:00.386] message:Error in configuration: context was not found for specified context: missing-context
I0814 09:11:00.386] has:context was not found for specified context: missing-context
I0814 09:11:00.455] Successful
I0814 09:11:00.456] message:error: no server found for cluster "missing-cluster"
I0814 09:11:00.456] has:no server found for cluster "missing-cluster"
I0814 09:11:00.532] Successful
I0814 09:11:00.532] message:error: auth info "missing-user" does not exist
I0814 09:11:00.532] has:auth info "missing-user" does not exist
W0814 09:11:00.633] E0814 09:11:00.028249   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:00.633] E0814 09:11:00.130218   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:00.634] E0814 09:11:00.235533   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:00.634] E0814 09:11:00.362136   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:11:00.734] Successful
I0814 09:11:00.735] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0814 09:11:00.735] has:error loading config file
I0814 09:11:00.743] Successful
I0814 09:11:00.743] message:error: stat missing-config: no such file or directory
I0814 09:11:00.744] has:no such file or directory
I0814 09:11:00.757] +++ exit code: 0
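A minimal sketch (not from the log) of client-config invocations that produce the error strings asserted above; the exact commands in the test script may differ:
kubectl get pods --kubeconfig=missing           # "stat missing: no such file or directory"
kubectl get pods --context=missing-context      # "context was not found for specified context: missing-context"
kubectl get pods --cluster=missing-cluster      # "no server found for cluster \"missing-cluster\""
kubectl get pods --user=missing-user            # "auth info \"missing-user\" does not exist"
kubectl get pods --kubeconfig=/tmp/newconfig.yaml   # fails to load when the file declares an unknown Config version (e.g. v-1)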
I0814 09:11:00.793] Recording: run_service_accounts_tests
I0814 09:11:00.794] Running command: run_service_accounts_tests
I0814 09:11:00.817] 
I0814 09:11:00.819] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0814 09:11:01.157] namespace/test-service-accounts created
I0814 09:11:01.255] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0814 09:11:01.322] serviceaccount/test-service-account created
I0814 09:11:01.417] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0814 09:11:01.493] serviceaccount "test-service-account" deleted
I0814 09:11:01.585] namespace "test-service-accounts" deleted
W0814 09:11:01.686] E0814 09:11:01.029839   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:01.687] E0814 09:11:01.131615   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:01.687] E0814 09:11:01.236829   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:01.687] E0814 09:11:01.363916   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:02.032] E0814 09:11:02.031690   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:02.133] E0814 09:11:02.133169   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:02.239] E0814 09:11:02.239056   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:02.366] E0814 09:11:02.365846   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:03.033] E0814 09:11:03.033173   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:03.135] E0814 09:11:03.134866   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:03.241] E0814 09:11:03.240803   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:03.368] E0814 09:11:03.367728   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:04.035] E0814 09:11:04.035024   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:04.137] E0814 09:11:04.136447   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:04.243] E0814 09:11:04.242338   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:04.370] E0814 09:11:04.369626   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:05.037] E0814 09:11:05.036821   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:05.139] E0814 09:11:05.139039   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:05.244] E0814 09:11:05.243873   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:05.371] E0814 09:11:05.370990   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:06.039] E0814 09:11:06.038838   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:06.140] E0814 09:11:06.140212   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:06.245] E0814 09:11:06.245202   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:06.373] E0814 09:11:06.372543   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:11:06.700] +++ exit code: 0
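Equivalent commands for the service-account checks above, as a rough sketch (not from the log):
kubectl create namespace test-service-accounts
kubectl create serviceaccount test-service-account --namespace=test-service-accounts
kubectl get serviceaccount/test-service-account --namespace=test-service-accounts \
    -o go-template='{{.metadata.name}}'        # core.sh:838 expects: test-service-account
kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
kubectl delete namespace test-service-accounts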
I0814 09:11:06.738] Recording: run_job_tests
I0814 09:11:06.738] Running command: run_job_tests
I0814 09:11:06.761] 
I0814 09:11:06.763] +++ Running case: test-cmd.run_job_tests 
I0814 09:11:06.766] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0814 09:11:07.585] Labels:                        run=pi
I0814 09:11:07.585] Annotations:                   <none>
I0814 09:11:07.585] Schedule:                      59 23 31 2 *
I0814 09:11:07.585] Concurrency Policy:            Allow
I0814 09:11:07.586] Suspend:                       False
I0814 09:11:07.586] Successful Job History Limit:  3
I0814 09:11:07.586] Failed Job History Limit:      1
I0814 09:11:07.586] Starting Deadline Seconds:     <unset>
I0814 09:11:07.587] Selector:                      <unset>
I0814 09:11:07.587] Parallelism:                   <unset>
I0814 09:11:07.587] Completions:                   <unset>
I0814 09:11:07.587] Pod Template:
I0814 09:11:07.588]   Labels:  run=pi
... skipping 32 lines ...
I0814 09:11:08.093]                 run=pi
I0814 09:11:08.093] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0814 09:11:08.093] Controlled By:  CronJob/pi
I0814 09:11:08.094] Parallelism:    1
I0814 09:11:08.094] Completions:    1
I0814 09:11:08.094] Start Time:     Wed, 14 Aug 2019 09:11:07 +0000
I0814 09:11:08.094] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0814 09:11:08.094] Pod Template:
I0814 09:11:08.094]   Labels:  controller-uid=4f9867b0-044c-4a61-8044-ab883d2c48f7
I0814 09:11:08.094]            job-name=test-job
I0814 09:11:08.094]            run=pi
I0814 09:11:08.094]   Containers:
I0814 09:11:08.094]    pi:
... skipping 15 lines ...
I0814 09:11:08.096]   Type    Reason            Age   From            Message
I0814 09:11:08.096]   ----    ------            ----  ----            -------
I0814 09:11:08.096]   Normal  SuccessfulCreate  1s    job-controller  Created pod: test-job-9r22q
I0814 09:11:08.173] job.batch "test-job" deleted
I0814 09:11:08.254] cronjob.batch "pi" deleted
I0814 09:11:08.334] namespace "test-jobs" deleted
W0814 09:11:08.434] E0814 09:11:07.040106   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:08.435] E0814 09:11:07.141537   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:08.435] E0814 09:11:07.247030   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:08.436] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 09:11:08.436] E0814 09:11:07.374122   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:08.436] I0814 09:11:07.834554   53035 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"4f9867b0-044c-4a61-8044-ab883d2c48f7", APIVersion:"batch/v1", ResourceVersion:"1350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-9r22q
W0814 09:11:08.436] E0814 09:11:08.041959   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:08.437] E0814 09:11:08.142879   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:08.437] E0814 09:11:08.248200   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:08.437] E0814 09:11:08.375625   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:09.044] E0814 09:11:09.043546   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:09.144] E0814 09:11:09.144263   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:09.250] E0814 09:11:09.249807   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:09.378] E0814 09:11:09.377480   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:10.046] E0814 09:11:10.045351   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:10.146] E0814 09:11:10.146077   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:10.252] E0814 09:11:10.251532   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:10.380] E0814 09:11:10.379229   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:11.047] E0814 09:11:11.046882   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:11.148] E0814 09:11:11.147755   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:11.254] E0814 09:11:11.253305   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:11.381] E0814 09:11:11.380743   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:12.049] E0814 09:11:12.048415   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:12.149] E0814 09:11:12.149048   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:12.255] E0814 09:11:12.254710   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:12.383] E0814 09:11:12.382288   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:13.051] E0814 09:11:13.050170   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:13.151] E0814 09:11:13.150880   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:13.256] E0814 09:11:13.256095   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:13.384] E0814 09:11:13.383465   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:11:13.484] +++ exit code: 0
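A rough sketch (not from the log) of the CronJob/Job flow behind the describe output above; the perl image and bpi arguments are illustrative and the test may set them differently:
kubectl create namespace test-jobs
# Deprecated generator, matching the warning captured above:
kubectl run pi --generator=cronjob/v1beta1 --schedule="59 23 31 2 *" --restart=OnFailure \
    --image=k8s.gcr.io/perl --namespace=test-jobs -- perl -Mbignum=bpi -wle 'print bpi(20)'
# Instantiating a Job from the CronJob adds the cronjob.kubernetes.io/instantiate: manual annotation seen above:
kubectl create job test-job --from=cronjob/pi --namespace=test-jobs
kubectl describe job test-job --namespace=test-jobs
kubectl delete namespace test-jobs   # cleans up the job, cronjob, and pods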
I0814 09:11:13.498] Recording: run_create_job_tests
I0814 09:11:13.498] Running command: run_create_job_tests
I0814 09:11:13.519] 
I0814 09:11:13.522] +++ Running case: test-cmd.run_create_job_tests 
I0814 09:11:13.525] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 6 lines ...
I0814 09:11:13.940] job.batch "test-job" deleted
I0814 09:11:14.032] job.batch/test-job-pi created
I0814 09:11:14.133] create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
I0814 09:11:14.211] job.batch "test-job-pi" deleted
W0814 09:11:14.312] I0814 09:11:13.763973   53035 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565773873-3621", Name:"test-job", UID:"dec1dc48-a4ae-40c5-9fd2-ff914dc3a763", APIVersion:"batch/v1", ResourceVersion:"1368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-6xb52
W0814 09:11:14.313] I0814 09:11:14.027387   53035 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565773873-3621", Name:"test-job-pi", UID:"115b6bec-33c7-4eb7-aa82-1ac56b97ac66", APIVersion:"batch/v1", ResourceVersion:"1375", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-292wk
W0814 09:11:14.314] E0814 09:11:14.051720   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:14.314] E0814 09:11:14.152342   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:14.314] E0814 09:11:14.257467   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:14.315] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 09:11:14.385] E0814 09:11:14.384845   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:14.425] I0814 09:11:14.424253   53035 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565773873-3621", Name:"my-pi", UID:"e0d6136f-897c-4499-9af7-fab31af97d81", APIVersion:"batch/v1", ResourceVersion:"1384", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-kttbm
I0814 09:11:14.525] cronjob.batch/test-pi created
I0814 09:11:14.526] job.batch/my-pi created
I0814 09:11:14.526] Successful
I0814 09:11:14.526] message:[perl -Mbignum=bpi -wle print bpi(10)]
I0814 09:11:14.526] has:perl -Mbignum=bpi -wle print bpi(10)
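Sketch (not from the log) of the kubectl create job variants the create.sh checks above cover; image and argument values follow what is visible in the log and are otherwise illustrative:
kubectl create job test-job-pi --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(10)'
kubectl get job test-job-pi -o go-template='{{(index .spec.template.spec.containers 0).image}}'   # create.sh:92 expects: k8s.gcr.io/perl
kubectl create job my-pi --from=cronjob/test-pi     # inherits the perl/bpi command asserted above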
... skipping 9 lines ...
I0814 09:11:14.819] +++ [0814 09:11:14] Creating namespace namespace-1565773874-32101
I0814 09:11:14.900] namespace/namespace-1565773874-32101 created
I0814 09:11:14.979] Context "test" modified.
I0814 09:11:14.991] +++ [0814 09:11:14] Testing pod templates
I0814 09:11:15.088] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:11:15.251] podtemplate/nginx created
W0814 09:11:15.351] E0814 09:11:15.053162   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:15.352] E0814 09:11:15.153829   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:15.352] I0814 09:11:15.247676   49603 controller.go:606] quota admission added evaluator for: podtemplates
W0814 09:11:15.352] E0814 09:11:15.258927   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:15.387] E0814 09:11:15.386338   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:11:15.487] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 09:11:15.488] NAME    CONTAINERS   IMAGES   POD LABELS
I0814 09:11:15.488] nginx   nginx        nginx    name=nginx
I0814 09:11:15.653] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 09:11:15.733] podtemplate "nginx" deleted
I0814 09:11:15.829] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
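The podtemplate checks above boil down to creating and deleting a minimal PodTemplate; a hedged sketch (the container name and nginx image are assumptions consistent with the table printed above):
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: PodTemplate
metadata:
  name: nginx
template:
  metadata:
    labels:
      name: nginx
  spec:
    containers:
    - name: nginx
      image: nginx
EOF
kubectl get podtemplates       # prints the NAME/CONTAINERS/IMAGES/POD LABELS table shown above
kubectl delete podtemplate nginx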
... skipping 5 lines ...
I0814 09:11:15.908] +++ working dir: /go/src/k8s.io/kubernetes
I0814 09:11:15.911] +++ command: run_service_tests
I0814 09:11:15.982] Context "test" modified.
I0814 09:11:15.991] +++ [0814 09:11:15] Testing kubectl(v1:services)
I0814 09:11:16.087] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 09:11:16.259] service/redis-master created
W0814 09:11:16.360] E0814 09:11:16.054973   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:16.360] E0814 09:11:16.155743   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:16.361] E0814 09:11:16.260355   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:16.388] E0814 09:11:16.387566   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:11:16.488] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 09:11:16.507] core.sh:864: Successful describe services redis-master:
I0814 09:11:16.507] Name:              redis-master
I0814 09:11:16.507] Namespace:         default
I0814 09:11:16.508] Labels:            app=redis
I0814 09:11:16.508]                    role=master
... skipping 301 lines ...
I0814 09:11:17.910]   selector:
I0814 09:11:17.910]     role: padawan
I0814 09:11:17.910]   sessionAffinity: None
I0814 09:11:17.911]   type: ClusterIP
I0814 09:11:17.911] status:
I0814 09:11:17.911]   loadBalancer: {}
W0814 09:11:18.011] E0814 09:11:17.056511   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:18.012] E0814 09:11:17.160040   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:18.012] E0814 09:11:17.261820   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:18.012] E0814 09:11:17.388910   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:18.012] error: you must specify resources by --filename when --local is set.
W0814 09:11:18.012] Example resource specifications include:
W0814 09:11:18.012]    '-f rsrc.yaml'
W0814 09:11:18.012]    '--filename=rsrc.json'
W0814 09:11:18.058] E0814 09:11:18.057938   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:11:18.159] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0814 09:11:18.242] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 09:11:18.331] service "redis-master" deleted
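A brief sketch (not from the log) of the kubectl set selector usage that produces both the --local error and the redis:master:backend selector asserted by core.sh:898 above; the manifest filename and the selector keys are hypothetical:
kubectl set selector services redis-master role=padawan --local -o yaml          # rejected: --local requires -f/--filename
kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml   # edits only the local manifest and prints it
kubectl set selector service redis-master app=redis,role=master,tier=backend     # live update; values match redis:master:backend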
I0814 09:11:18.437] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 09:11:18.525] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 09:11:18.689] service/redis-master created
I0814 09:11:18.794] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 09:11:18.883] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 09:11:19.043] service/service-v1-test created
W0814 09:11:19.144] E0814 09:11:18.161752   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:19.145] E0814 09:11:18.263207   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:19.145] E0814 09:11:18.390036   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:19.145] E0814 09:11:19.059821   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:19.164] E0814 09:11:19.163410   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:11:19.264] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0814 09:11:19.319] service/service-v1-test replaced
I0814 09:11:19.423] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0814 09:11:19.504] service "redis-master" deleted
I0814 09:11:19.593] service "service-v1-test" deleted
I0814 09:11:19.693] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 09:11:19.786] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 09:11:19.936] service/redis-master created
W0814 09:11:20.037] E0814 09:11:19.265042   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:20.038] E0814 09:11:19.391523   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:20.062] E0814 09:11:20.061192   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:11:20.162] service/redis-slave created
I0814 09:11:20.218] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0814 09:11:20.308] Successful
I0814 09:11:20.309] message:NAME           RSRC
I0814 09:11:20.309] kubernetes     144
I0814 09:11:20.309] redis-master   1418
... skipping 54 lines ...
I0814 09:11:23.799] +++ [0814 09:11:23] Creating namespace namespace-1565773883-15678
I0814 09:11:23.867] namespace/namespace-1565773883-15678 created
I0814 09:11:23.938] Context "test" modified.
I0814 09:11:23.948] +++ [0814 09:11:23] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
I0814 09:11:24.039] apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:11:24.199] daemonset.apps/bind created
W0814 09:11:24.300] E0814 09:11:20.165220   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.301] E0814 09:11:20.266494   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.301] E0814 09:11:20.392781   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.301] E0814 09:11:21.062712   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.302] E0814 09:11:21.167287   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.302] E0814 09:11:21.268374   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.302] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 09:11:24.303] I0814 09:11:21.317652   53035 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"f317cd29-0acf-49c7-84a2-7fb303233e88", APIVersion:"apps/v1", ResourceVersion:"1434", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-6cdd84c77d to 2
W0814 09:11:24.303] I0814 09:11:21.324526   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"89e7cbe7-34ab-431f-9b14-3cb6264ef257", APIVersion:"apps/v1", ResourceVersion:"1435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-bnckn
W0814 09:11:24.304] I0814 09:11:21.328340   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-6cdd84c77d", UID:"89e7cbe7-34ab-431f-9b14-3cb6264ef257", APIVersion:"apps/v1", ResourceVersion:"1435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-6cdd84c77d-th6gm
W0814 09:11:24.304] E0814 09:11:21.394629   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.305] E0814 09:11:22.064449   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.305] E0814 09:11:22.169028   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.305] E0814 09:11:22.269761   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.305] E0814 09:11:22.396711   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.305] I0814 09:11:22.459653   49603 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0814 09:11:24.306] I0814 09:11:22.471066   49603 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0814 09:11:24.306] E0814 09:11:23.066104   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.306] E0814 09:11:23.170335   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.306] E0814 09:11:23.271246   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.306] E0814 09:11:23.397896   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.307] E0814 09:11:24.067896   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.307] E0814 09:11:24.171776   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.307] E0814 09:11:24.272688   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:24.399] E0814 09:11:24.399026   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:11:24.501] apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1565773883-15678"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
I0814 09:11:24.502]  kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
I0814 09:11:24.502] daemonset.apps/bind skipped rollback (current template already matches revision 1)
I0814 09:11:24.503] apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0814 09:11:24.593] apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0814 09:11:24.762] daemonset.apps/bind configured
... skipping 18 lines ...
I0814 09:11:25.471] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0814 09:11:25.563] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0814 09:11:25.667] daemonset.apps/bind rolled back
I0814 09:11:25.768] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0814 09:11:25.860] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0814 09:11:25.967] Successful
I0814 09:11:25.967] message:error: unable to find specified revision 1000000 in history
I0814 09:11:25.967] has:unable to find specified revision
I0814 09:11:26.062] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0814 09:11:26.157] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0814 09:11:26.262] daemonset.apps/bind rolled back
I0814 09:11:26.362] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0814 09:11:26.461] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
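For reference, a sketch of the rollout sequence the apps.sh daemonset checks above walk through; the manifest paths and --record flag come from the recorded change-cause annotations visible earlier in the log:
kubectl apply -f hack/testdata/rollingupdate-daemonset.yaml --record       # revision 1: single pause:2.0 container
kubectl apply -f hack/testdata/rollingupdate-daemonset-rv2.yaml --record   # revision 2: pause:latest + nginx:test-cmd
kubectl rollout undo daemonset bind                         # back to revision 1 (pause:2.0, 1 container)
kubectl rollout undo daemonset bind --to-revision=1000000   # fails: unable to find specified revision
kubectl rollout undo daemonset bind --to-revision=2         # forward to revision 2 again (2 containers)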
... skipping 10 lines ...
I0814 09:11:26.832] namespace/namespace-1565773886-24443 created
I0814 09:11:26.908] Context "test" modified.
I0814 09:11:26.916] +++ [0814 09:11:26] Testing kubectl(v1:replicationcontrollers)
I0814 09:11:27.009] core.sh:1046: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:11:27.171] replicationcontroller/frontend created
I0814 09:11:27.264] replicationcontroller "frontend" deleted
W0814 09:11:27.365] E0814 09:11:25.069390   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:27.366] E0814 09:11:25.173190   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:27.367] E0814 09:11:25.274036   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:27.368] E0814 09:11:25.400519   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:27.368] E0814 09:11:26.070870   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:27.368] E0814 09:11:26.174503   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:27.369] E0814 09:11:26.275085   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:27.373] E0814 09:11:26.281975   53035 daemon_controller.go:302] namespace-1565773883-15678/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1565773883-15678", SelfLink:"/apis/apps/v1/namespaces/namespace-1565773883-15678/daemonsets/bind", UID:"64e1c566-5eef-4379-b872-05fc1390c23d", ResourceVersion:"1505", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701370684, loc:(*time.Location)(0x7208260)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1565773883-15678\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001b6f020), Fields:(*v1.Fields)(0xc001b6f060)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001b6f0a0), Fields:(*v1.Fields)(0xc001b6f0e0)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001b6f120), Fields:(*v1.Fields)(0xc001b6f160)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b6f1c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00218beb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d1fd40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001b6f200), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002bf8e70)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00218bf0c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0814 09:11:27.374] E0814 09:11:26.401725   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:27.374] E0814 09:11:27.072449   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:27.374] E0814 09:11:27.175725   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:27.375] I0814 09:11:27.178539   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565773886-24443", Name:"frontend", UID:"103c0b84-be21-4751-8195-30b6e5223c94", APIVersion:"v1", ResourceVersion:"1514", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dbtkt
W0814 09:11:27.375] I0814 09:11:27.182954   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565773886-24443", Name:"frontend", UID:"103c0b84-be21-4751-8195-30b6e5223c94", APIVersion:"v1", ResourceVersion:"1514", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-n6dkq
W0814 09:11:27.375] I0814 09:11:27.183264   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565773886-24443", Name:"frontend", UID:"103c0b84-be21-4751-8195-30b6e5223c94", APIVersion:"v1", ResourceVersion:"1514", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9ns8f
W0814 09:11:27.376] E0814 09:11:27.276283   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 09:11:27.404] E0814 09:11:27.403260   53035 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 09:11:27.505] core.sh:1051: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:11:27.505] core.sh:1055: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 09:11:27.643] replicationcontroller/frontend created
W0814 09:11:27.744] I0814 09:11:27.648911   53035 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565773886-24443", Name:"frontend", UID:"5bc58521-4d8b-4372-a1a1-f22fd6fac6f0", APIVersion:"v1", ResourceVersion:"1530", FieldPath:""}): type: 'Normal' reason: 'Successfu