PR draveness: feat: use named array instead of array in normalizing score
Result FAILURE
Tests 1 failed / 2470 succeeded
Started 2019-08-14 05:45
Elapsed 30m47s
Revision
Builder gke-prow-ssd-pool-1a225945-7tph
Refs master:a520302f
80901:aa5f9fda
pod 92153adb-be56-11e9-ac8f-6e56e203dc81
infra-commit 89e6e9743
repo k8s.io/kubernetes
repo-commit 11b635fd98189f524c06c025687efc0fe976b5ff
repos {u'k8s.io/kubernetes': u'master:a520302fb4673e595fcb70d2a4db26598371be92,80901:aa5f9fda52d0171e45682254e0d37b16f58ae6fc'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptWithPermitPlugin 1m4s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptWithPermitPlugin$
=== RUN   TestPreemptWithPermitPlugin
I0814 06:11:21.643384  109502 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 06:11:21.643419  109502 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 06:11:21.643432  109502 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 06:11:21.643460  109502 master.go:234] Using reconciler: 
I0814 06:11:21.646117  109502 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.646239  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.646326  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.646378  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.646451  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.646859  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.646998  109502 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 06:11:21.647031  109502 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.647218  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.647230  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.647265  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.647335  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.647378  109502 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 06:11:21.647632  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.647982  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.648077  109502 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 06:11:21.648104  109502 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.648169  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.648181  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.648209  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.648258  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.648288  109502 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 06:11:21.648506  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.648840  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.648924  109502 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 06:11:21.648953  109502 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.649015  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.649014  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.649026  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.649068  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.649087  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.649102  109502 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 06:11:21.649370  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.649579  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.649599  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.649695  109502 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 06:11:21.649827  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.649881  109502 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 06:11:21.650900  109502 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.650982  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.651006  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.651038  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.651099  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.652399  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.652832  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.652950  109502 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 06:11:21.653086  109502 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.653147  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.653157  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.653185  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.653224  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.653253  109502 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 06:11:21.653610  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.653826  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.653935  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.654043  109502 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 06:11:21.654074  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.654108  109502 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 06:11:21.654183  109502 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.654244  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.654254  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.654284  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.654333  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.654606  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.654710  109502 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 06:11:21.654887  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.654903  109502 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.654960  109502 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 06:11:21.654977  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.654987  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.655014  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.655109  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.655337  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.655427  109502 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 06:11:21.655535  109502 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.655577  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.655586  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.655614  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.655646  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.655674  109502 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 06:11:21.655892  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.656162  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.656253  109502 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 06:11:21.656374  109502 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.656440  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.656451  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.656514  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.656558  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.656591  109502 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 06:11:21.656815  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.657107  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.657189  109502 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 06:11:21.657331  109502 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.657415  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.657438  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.657469  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.657513  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.657544  109502 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 06:11:21.657665  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.658247  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.658361  109502 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 06:11:21.658496  109502 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.658582  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.658594  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.658625  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.658668  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.658705  109502 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 06:11:21.658952  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.659209  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.659670  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.659838  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.660077  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.660213  109502 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 06:11:21.660370  109502 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.660421  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.660446  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.660456  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.660484  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.660527  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.660555  109502 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 06:11:21.660758  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.660807  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.660969  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.661269  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.661359  109502 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 06:11:21.661483  109502 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.661553  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.661563  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.661594  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.661645  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.661705  109502 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 06:11:21.661906  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.662187  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.662281  109502 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 06:11:21.662309  109502 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.662390  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.662401  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.662429  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.662525  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.662554  109502 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 06:11:21.662715  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.663219  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.663366  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.664184  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.664274  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.664283  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.664305  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.664350  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.664387  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.664684  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.665179  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.665317  109502 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.665377  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.665386  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.665412  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.665450  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.665487  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.665812  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.665975  109502 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 06:11:21.666637  109502 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.666912  109502 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.667224  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.667287  109502 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 06:11:21.667787  109502 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.668489  109502 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.669403  109502 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.670097  109502 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.670520  109502 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.670662  109502 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.670879  109502 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.671333  109502 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.671422  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.671504  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.671961  109502 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.672165  109502 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.672965  109502 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.673245  109502 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.673812  109502 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.674049  109502 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.674853  109502 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.675059  109502 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.675200  109502 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.675322  109502 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.675501  109502 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.675658  109502 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.675861  109502 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.676563  109502 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.676843  109502 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.677621  109502 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.678385  109502 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.678634  109502 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.678862  109502 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.679427  109502 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.679642  109502 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.680220  109502 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.681090  109502 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.681614  109502 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.682332  109502 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.682569  109502 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.682720  109502 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 06:11:21.682747  109502 master.go:434] Enabling API group "authentication.k8s.io".
I0814 06:11:21.682767  109502 master.go:434] Enabling API group "authorization.k8s.io".
I0814 06:11:21.682947  109502 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.683067  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.683082  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.683163  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.683276  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.683809  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.683940  109502 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 06:11:21.684052  109502 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.684126  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.684137  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.684137  109502 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 06:11:21.684165  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.684066  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.684312  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.684551  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.684695  109502 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 06:11:21.684898  109502 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.684973  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.684982  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.685011  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.685067  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.685100  109502 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 06:11:21.685700  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.686180  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.687358  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.688461  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.688835  109502 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 06:11:21.688868  109502 master.go:434] Enabling API group "autoscaling".
I0814 06:11:21.688928  109502 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 06:11:21.689015  109502 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.689090  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.689101  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.689138  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.689181  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.689783  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.689919  109502 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 06:11:21.690053  109502 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.690122  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.690134  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.690166  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.690207  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.690242  109502 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 06:11:21.690471  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.690739  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.691167  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.691467  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.691887  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.692148  109502 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 06:11:21.692270  109502 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 06:11:21.692300  109502 master.go:434] Enabling API group "batch".
I0814 06:11:21.692571  109502 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.692812  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.692835  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.692882  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.693012  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.693186  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.693555  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.693818  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.693892  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.693943  109502 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 06:11:21.693965  109502 master.go:434] Enabling API group "certificates.k8s.io".
I0814 06:11:21.694072  109502 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 06:11:21.694112  109502 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.694185  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.694198  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.694230  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.694275  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.694606  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.694665  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.694731  109502 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 06:11:21.694835  109502 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 06:11:21.694882  109502 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.694979  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.694991  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.695047  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.695091  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.696192  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.696260  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.696207  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.696379  109502 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 06:11:21.696423  109502 master.go:434] Enabling API group "coordination.k8s.io".
I0814 06:11:21.696451  109502 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 06:11:21.696666  109502 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.696747  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.696757  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.696814  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.696934  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.697230  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.697329  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.697355  109502 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 06:11:21.697405  109502 master.go:434] Enabling API group "extensions".
I0814 06:11:21.697454  109502 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 06:11:21.697558  109502 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.697628  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.697640  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.697674  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.697727  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.697962  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.698057  109502 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 06:11:21.698207  109502 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.698260  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.698276  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.698287  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.698338  109502 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 06:11:21.698600  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.698673  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.699018  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.699132  109502 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 06:11:21.699149  109502 master.go:434] Enabling API group "networking.k8s.io".
I0814 06:11:21.699180  109502 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.699243  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.699254  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.699287  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.699329  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.699358  109502 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 06:11:21.699592  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.699920  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.700084  109502 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 06:11:21.700118  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.700165  109502 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 06:11:21.700219  109502 master.go:434] Enabling API group "node.k8s.io".
I0814 06:11:21.700878  109502 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.701094  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.701337  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.701460  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.701709  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.701899  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.702643  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.704477  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.704513  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.704757  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.704982  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.708376  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.708497  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.708727  109502 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 06:11:21.708916  109502 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.709035  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.709048  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.709084  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.709162  109502 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 06:11:21.709336  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.709681  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.709881  109502 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 06:11:21.709932  109502 master.go:434] Enabling API group "policy".
I0814 06:11:21.709968  109502 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.710038  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.710049  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.710080  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.710082  109502 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 06:11:21.710169  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.710467  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.710559  109502 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 06:11:21.710696  109502 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.710766  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.710793  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.710820  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.710847  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.710858  109502 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 06:11:21.710715  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.710995  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.711341  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.711467  109502 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 06:11:21.711505  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.711499  109502 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.711542  109502 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 06:11:21.711565  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.711576  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.711635  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.711751  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.712000  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.712082  109502 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 06:11:21.712145  109502 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 06:11:21.712217  109502 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.712276  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.712286  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.712315  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.712400  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.712619  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.712715  109502 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 06:11:21.712754  109502 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.712835  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.712846  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.712881  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.712956  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.712999  109502 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 06:11:21.713228  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.713527  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.713573  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.713603  109502 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 06:11:21.713728  109502 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.713815  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.713829  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.713861  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.713911  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.714093  109502 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 06:11:21.714311  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.715151  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.715266  109502 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 06:11:21.715296  109502 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.715363  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.715374  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.715404  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.715447  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.715477  109502 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 06:11:21.715703  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.716055  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.716650  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.716717  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.716748  109502 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 06:11:21.716944  109502 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.717014  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.717025  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.717056  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.717107  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.717136  109502 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 06:11:21.717395  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.717758  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.717770  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.717950  109502 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 06:11:21.718006  109502 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0814 06:11:21.712069  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.719402  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.719556  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.719842  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.719895  109502 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 06:11:21.720057  109502 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.720135  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.720146  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.720174  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.720223  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.720243  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.720544  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.720649  109502 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 06:11:21.720817  109502 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.720884  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.720894  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.720896  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.720923  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.721002  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.721033  109502 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 06:11:21.721257  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.721487  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.721532  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.721578  109502 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 06:11:21.721642  109502 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 06:11:21.721790  109502 master.go:423] Skipping disabled API group "settings.k8s.io".
I0814 06:11:21.721835  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.721933  109502 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.721998  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.722008  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.722037  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.722073  109502 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 06:11:21.722343  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.722582  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.722635  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.722679  109502 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 06:11:21.722848  109502 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.722901  109502 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 06:11:21.722925  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.722935  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.722960  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.723094  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.723246  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.723891  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.724202  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.724325  109502 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 06:11:21.724354  109502 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.724414  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.724424  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.724454  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.724495  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.724557  109502 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 06:11:21.724851  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.725164  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.725257  109502 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 06:11:21.725264  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.725287  109502 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.725351  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.725363  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.725394  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.725465  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.725528  109502 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 06:11:21.725548  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.725740  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.725761  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.725936  109502 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 06:11:21.726108  109502 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.726192  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.726206  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.726245  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.726340  109502 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 06:11:21.726769  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.727050  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.727160  109502 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 06:11:21.727221  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.727295  109502 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.727366  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.727376  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.727407  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.727462  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.727493  109502 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 06:11:21.727579  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.727742  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.728053  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.728137  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.728185  109502 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 06:11:21.728204  109502 master.go:434] Enabling API group "storage.k8s.io".
I0814 06:11:21.728344  109502 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.728436  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.728447  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.728465  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.728476  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.728490  109502 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 06:11:21.728657  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.728926  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.729059  109502 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 06:11:21.729228  109502 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.729293  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.729304  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.729334  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.729393  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.729418  109502 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 06:11:21.729514  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.729736  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.729759  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.729870  109502 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 06:11:21.730001  109502 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.730042  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.730065  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.730075  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.730151  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.730203  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.730245  109502 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 06:11:21.730474  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.730881  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.730991  109502 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 06:11:21.731045  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.731124  109502 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.731188  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.731198  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.731229  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.731273  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.731304  109502 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 06:11:21.731548  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.731641  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.731891  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.731926  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.731996  109502 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 06:11:21.732114  109502 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 06:11:21.732136  109502 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.732200  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.732211  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.732242  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.732336  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.732561  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.732646  109502 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 06:11:21.732664  109502 master.go:434] Enabling API group "apps".
I0814 06:11:21.732694  109502 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.732748  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.732757  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.732838  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.732867  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.732879  109502 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 06:11:21.733008  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.733355  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.734343  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.734369  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.734469  109502 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 06:11:21.734493  109502 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.734758  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.734831  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.734888  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.735093  109502 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 06:11:21.736636  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.736651  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.736682  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.736729  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.737005  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.737025  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.737173  109502 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 06:11:21.737223  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.737202  109502 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.737268  109502 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 06:11:21.737281  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.737290  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.737321  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.737462  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.737733  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.737832  109502 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 06:11:21.737857  109502 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.737921  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.737930  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.737958  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.737993  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.738018  109502 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 06:11:21.738239  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.738455  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.738491  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.738551  109502 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 06:11:21.738562  109502 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 06:11:21.738591  109502 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.738604  109502 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 06:11:21.738788  109502 client.go:354] parsed scheme: ""
I0814 06:11:21.738799  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:21.738825  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:21.738911  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.741852  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.742227  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.742551  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.742840  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:21.742966  109502 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 06:11:21.742979  109502 master.go:434] Enabling API group "events.k8s.io".
I0814 06:11:21.743045  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:21.743117  109502 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 06:11:21.743198  109502 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.743376  109502 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.743625  109502 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.743725  109502 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.743841  109502 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.743940  109502 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.744129  109502 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.744238  109502 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.744315  109502 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.744392  109502 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.745372  109502 watch_cache.go:405] Replace watchCache (rev: 28494) 
I0814 06:11:21.745510  109502 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.745763  109502 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.746759  109502 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.747297  109502 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.748316  109502 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.748683  109502 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.749558  109502 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.749928  109502 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.750669  109502 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.751098  109502 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 06:11:21.751239  109502 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0814 06:11:21.751880  109502 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.752069  109502 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.752341  109502 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.753197  109502 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.754145  109502 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.755474  109502 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.757176  109502 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.758135  109502 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.758912  109502 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.759283  109502 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.760115  109502 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 06:11:21.761742  109502 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 06:11:21.762824  109502 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.763224  109502 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.764542  109502 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.765417  109502 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.766145  109502 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.766956  109502 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.768063  109502 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.768662  109502 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.769177  109502 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.770111  109502 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.771039  109502 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 06:11:21.771287  109502 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 06:11:21.772203  109502 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.773051  109502 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 06:11:21.773125  109502 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 06:11:21.773726  109502 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.774684  109502 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.774931  109502 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.775680  109502 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.776171  109502 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.776755  109502 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.777404  109502 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 06:11:21.777491  109502 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 06:11:21.778364  109502 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.779131  109502 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.779544  109502 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.780280  109502 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.780532  109502 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.780881  109502 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.781755  109502 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.782013  109502 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.782242  109502 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.783539  109502 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.783809  109502 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.784142  109502 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 06:11:21.784237  109502 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 06:11:21.784255  109502 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0814 06:11:21.785115  109502 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.785853  109502 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.786630  109502 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.787251  109502 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 06:11:21.788096  109502 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"b45a059f-e52d-4e07-a1b9-6a9376fa7e0f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
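
The CompactionInterval and CountMetricPollPeriod values repeated in the storagebackend.Config lines above appear to be plain nanosecond counts; read as Go time.Duration values they come out to 5m0s and 1m0s. A minimal sketch (stdlib only, not apiserver code) confirming that conversion:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the storagebackend.Config log lines above,
	// interpreted as nanoseconds (the underlying unit of time.Duration).
	compactionInterval := time.Duration(300000000000)   // prints as 5m0s
	countMetricPollPeriod := time.Duration(60000000000) // prints as 1m0s
	fmt.Println(compactionInterval, countMetricPollPeriod)
}
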
I0814 06:11:21.791167  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:21.791197  109502 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 06:11:21.791209  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:21.791219  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:21.791229  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:21.791238  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:21.791269  109502 httplog.go:90] GET /healthz: (238.132µs) 0 [Go-http-client/1.1 127.0.0.1:59986]
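
The [+]/[-] block above is the aggregated /healthz summary: each named check (ping, log, etcd, the poststarthooks) reports ok or failed, the summary withholds the failure reason, and the specific reasons are logged separately just before it. A small, hypothetical sketch of how such a report could be assembled, assuming a made-up check type (this is not the actual apiserver healthz code):

package main

import "fmt"

// check is a hypothetical named health check; the real server wires these
// up from etcd connectivity, poststarthooks, and so on.
type check struct {
	name string
	run  func() error
}

// report builds a [+]/[-] summary like the one in the log above.
func report(checks []check) (string, bool) {
	out := ""
	healthy := true
	for _, c := range checks {
		if err := c.run(); err != nil {
			// The detailed reason is logged elsewhere; the summary withholds it.
			out += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			healthy = false
		} else {
			out += fmt.Sprintf("[+]%s ok\n", c.name)
		}
	}
	if !healthy {
		out += "healthz check failed\n"
	}
	return out, healthy
}

func main() {
	checks := []check{
		{"ping", func() error { return nil }},
		{"etcd", func() error { return fmt.Errorf("etcd client connection not yet established") }},
	}
	summary, _ := report(checks)
	fmt.Print(summary)
}
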
I0814 06:11:21.792329  109502 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.2422ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59988]
I0814 06:11:21.797935  109502 httplog.go:90] GET /api/v1/services: (1.302168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59988]
I0814 06:11:21.806685  109502 httplog.go:90] GET /api/v1/services: (5.757886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59988]
I0814 06:11:21.809742  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:21.809783  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:21.809797  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:21.809806  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:21.809813  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:21.809841  109502 httplog.go:90] GET /healthz: (203.758µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59986]
E0814 06:11:21.818494  109502 factory.go:599] Error getting pod permit-plugince17b60c-e64e-4bae-a2d2-c51cc4c3cd4b/test-pod for retry: Get http://127.0.0.1:46721/api/v1/namespaces/permit-plugince17b60c-e64e-4bae-a2d2-c51cc4c3cd4b/pods/test-pod: dial tcp 127.0.0.1:46721: connect: connection refused; retrying...
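
The factory.go error above is the test's scheduler retrying a pod GET after a refused connection to a test apiserver endpoint that is not reachable yet (127.0.0.1:46721). A rough, hypothetical sketch of that kind of retry loop using only the standard library; the real retry logic lives in the scheduler's factory, not here:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetry issues GETs against url until one succeeds or attempts run out,
// sleeping between tries. Purely illustrative; not the scheduler's code.
func getWithRetry(url string, attempts int, backoff time.Duration) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			return resp, nil
		}
		lastErr = err
		fmt.Printf("attempt %d failed: %v; retrying...\n", i+1, err)
		time.Sleep(backoff)
	}
	return nil, lastErr
}

func main() {
	// Hypothetical URL mirroring the shape of the one in the log line above.
	if _, err := getWithRetry("http://127.0.0.1:46721/api/v1/namespaces/example/pods/test-pod", 3, 100*time.Millisecond); err != nil {
		fmt.Println("giving up:", err)
	}
}
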
I0814 06:11:21.830790  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (20.840282ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59988]
I0814 06:11:21.836756  109502 httplog.go:90] GET /api/v1/services: (5.613508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:21.836794  109502 httplog.go:90] POST /api/v1/namespaces: (5.513007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59988]
I0814 06:11:21.836989  109502 httplog.go:90] GET /api/v1/services: (5.900426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59986]
I0814 06:11:21.840876  109502 httplog.go:90] GET /api/v1/namespaces/kube-public: (3.545005ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59988]
I0814 06:11:21.843094  109502 httplog.go:90] POST /api/v1/namespaces: (1.813995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:21.844472  109502 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.064146ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:21.846384  109502 httplog.go:90] POST /api/v1/namespaces: (1.604009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:21.892600  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:21.892640  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:21.892653  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:21.892664  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:21.892673  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:21.892720  109502 httplog.go:90] GET /healthz: (265.993µs) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:21.911111  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:21.911143  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:21.911157  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:21.911167  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:21.911175  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:21.911210  109502 httplog.go:90] GET /healthz: (265.759µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:21.992587  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:21.992630  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:21.992644  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:21.992653  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:21.992662  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:21.992695  109502 httplog.go:90] GET /healthz: (257.823µs) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:22.011037  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.011086  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.011098  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.011116  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.011142  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.011176  109502 httplog.go:90] GET /healthz: (294.524µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.092551  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.092587  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.092599  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.092610  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.092617  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.092648  109502 httplog.go:90] GET /healthz: (236.944µs) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:22.110994  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.111036  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.111049  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.111059  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.111067  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.111101  109502 httplog.go:90] GET /healthz: (254.218µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.192571  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.192615  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.192631  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.192658  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.192668  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.192703  109502 httplog.go:90] GET /healthz: (279.441µs) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:22.211111  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.211168  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.211183  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.211194  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.211202  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.211240  109502 httplog.go:90] GET /healthz: (292.384µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.292559  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.292609  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.292623  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.292633  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.292642  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.292671  109502 httplog.go:90] GET /healthz: (253.687µs) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:22.311060  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.311096  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.311110  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.311120  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.311128  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.311154  109502 httplog.go:90] GET /healthz: (230.226µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.392567  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.392617  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.392630  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.392639  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.392646  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.392678  109502 httplog.go:90] GET /healthz: (274.097µs) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:22.411033  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.411071  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.411083  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.411093  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.411101  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.411135  109502 httplog.go:90] GET /healthz: (247.931µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.492597  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.492636  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.492649  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.492660  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.492667  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.492711  109502 httplog.go:90] GET /healthz: (256.729µs) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:22.511074  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.511112  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.511126  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.511136  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.511144  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.511176  109502 httplog.go:90] GET /healthz: (277.891µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.592615  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.592661  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.592674  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.592686  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.592694  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.592724  109502 httplog.go:90] GET /healthz: (259.708µs) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:22.611045  109502 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 06:11:22.611082  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.611095  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.611105  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.611114  109502 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.611141  109502 httplog.go:90] GET /healthz: (236.676µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.642654  109502 client.go:354] parsed scheme: ""
I0814 06:11:22.642687  109502 client.go:354] scheme "" not registered, fallback to default scheme
I0814 06:11:22.642734  109502 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 06:11:22.642915  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:22.643701  109502 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 06:11:22.643767  109502 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 06:11:22.693764  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.693808  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.693820  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.693828  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.693868  109502 httplog.go:90] GET /healthz: (1.423571ms) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:22.712443  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.712474  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.712486  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.712494  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.712529  109502 httplog.go:90] GET /healthz: (1.335223ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.792980  109502 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.57795ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59986]
I0814 06:11:22.794473  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.35027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60114]
I0814 06:11:22.795122  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.795143  109502 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 06:11:22.795154  109502 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 06:11:22.795163  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 06:11:22.795192  109502 httplog.go:90] GET /healthz: (2.324618ms) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:22.795407  109502 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.012542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59986]
I0814 06:11:22.795553  109502 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 06:11:22.796752  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.928054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60114]
I0814 06:11:22.796905  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (5.64506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.798456  109502 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.578529ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59986]
I0814 06:11:22.798654  109502 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.372527ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.798843  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.451115ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:22.801791  109502 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.076143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59986]
I0814 06:11:22.801969  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.606612ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.802069  109502 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.77227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:22.802218  109502 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 06:11:22.802232  109502 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
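
The priority-class bootstrap visible above follows a get-then-create pattern: a GET that returns 404 is followed by a POST that returns 201, and an object that already exists is left alone, which is why the log can say "created successfully or already exist". A minimal, hypothetical sketch of that idempotent ensure step against a made-up in-memory store (not the actual bootstrap code, which goes through the API client and apierrors.IsNotFound):

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the API's 404 response.
var errNotFound = errors.New("not found")

// store is a hypothetical minimal client: Get and Create by name.
type store map[string]int32

func (s store) Get(name string) (int32, error) {
	v, ok := s[name]
	if !ok {
		return 0, errNotFound
	}
	return v, nil
}

func (s store) Create(name string, value int32) { s[name] = value }

// ensurePriorityClass creates the object only when the lookup reports
// not-found, so repeated runs are idempotent.
func ensurePriorityClass(s store, name string, value int32) {
	if _, err := s.Get(name); errors.Is(err, errNotFound) {
		s.Create(name, value)
		fmt.Printf("created PriorityClass %s with value %d\n", name, value)
		return
	}
	fmt.Printf("PriorityClass %s already exists\n", name)
}

func main() {
	s := store{}
	ensurePriorityClass(s, "system-node-critical", 2000001000)
	ensurePriorityClass(s, "system-cluster-critical", 2000000000)
	ensurePriorityClass(s, "system-node-critical", 2000001000) // second run: already exists
}
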
I0814 06:11:22.803517  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.213382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59986]
I0814 06:11:22.804833  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (908.437µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:22.806171  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.014269ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:22.807407  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (764.252µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:22.808479  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (748.115µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:22.809582  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (643.628µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:22.812513  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.543906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:22.812633  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.812648  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:22.812670  109502 httplog.go:90] GET /healthz: (1.956617ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.812823  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0814 06:11:22.814092  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.126607ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.816072  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.70098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.816328  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 06:11:22.817251  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (736.426µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.819073  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.541544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.819230  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 06:11:22.820390  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.05191ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.822663  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.979894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.822835  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 06:11:22.823755  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (774.386µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.825510  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.363512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.825683  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 06:11:22.828364  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.541238ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.830273  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.622638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.830440  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 06:11:22.831662  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.10989ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.833682  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.564993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.833877  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 06:11:22.835008  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (913.735µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.837763  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.35008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.838254  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 06:11:22.839415  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (990.701µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.841594  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.589879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.841974  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 06:11:22.843528  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.364908ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.845945  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.03173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.846230  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 06:11:22.847141  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (772.018µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.848906  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.417786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.849222  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 06:11:22.850057  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (703.047µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.852214  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.69157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.852480  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 06:11:22.853600  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (928.797µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.855569  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.667279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.855729  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 06:11:22.856307  109502 cacher.go:763] cacher (*rbac.ClusterRole): 1 objects queued in incoming channel.
I0814 06:11:22.857032  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.089927ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.859186  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.783259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.859438  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 06:11:22.860507  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (854.655µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.862358  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.451531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.862797  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 06:11:22.863723  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (754.946µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.869358  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.325511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.869793  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 06:11:22.870959  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (930.626µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.872910  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.573355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.873184  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 06:11:22.874250  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (926.65µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.876127  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.459824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.876302  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 06:11:22.877284  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (795.528µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.879263  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.601402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.879519  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 06:11:22.881061  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.084081ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.883005  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.573751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.883455  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 06:11:22.884503  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (869.994µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.886050  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.190683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.886260  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 06:11:22.887259  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (708.349µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.888977  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.375521ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.889189  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 06:11:22.890343  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (814.285µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.893073  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.893115  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:22.893170  109502 httplog.go:90] GET /healthz: (886.114µs) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:22.893426  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.667906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.893563  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 06:11:22.894477  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (741.175µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.896128  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.288599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.896287  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 06:11:22.897478  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (965.366µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.899289  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.283394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.899469  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0814 06:11:22.900530  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (890.807µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.902582  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.535116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.903003  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 06:11:22.904534  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.116458ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.906365  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.348307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.906548  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 06:11:22.907524  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (692.087µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.909017  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.085292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.909254  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 06:11:22.910164  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (735.198µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.912262  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.912284  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:22.912316  109502 httplog.go:90] GET /healthz: (1.382612ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:22.912416  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.841794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.912564  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 06:11:22.913781  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (713.612µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.915278  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.216234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.915470  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 06:11:22.916267  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (643.505µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.919997  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.056781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.920236  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 06:11:22.921674  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.218766ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.923783  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.635895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.924298  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 06:11:22.925658  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.037056ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.928103  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.645197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.928662  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 06:11:22.929645  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (758.371µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.932087  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.968599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.932566  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 06:11:22.933510  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (772.065µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.935258  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.396332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.935468  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 06:11:22.936351  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (694.853µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.938030  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.359353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.938200  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 06:11:22.939184  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (863.337µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.940898  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.39064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.941154  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 06:11:22.942109  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (792.48µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.943731  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.308483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.943951  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 06:11:22.945117  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (820.756µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.946974  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.535534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.947186  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 06:11:22.948349  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.014475ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.950149  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.437561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.951555  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 06:11:22.957227  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (5.506024ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.959051  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.402675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.959438  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 06:11:22.960698  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (854.724µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.962431  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.348482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.962623  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 06:11:22.963601  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (766.006µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.965931  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.943551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.966121  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 06:11:22.967028  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (776.613µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.969174  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.793183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.969686  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 06:11:22.970810  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (963.873µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.973199  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.949593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.975110  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 06:11:22.976184  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (921.464µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.978204  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.709241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.978386  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 06:11:22.982116  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (3.588102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.984384  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.88448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.984703  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 06:11:22.985717  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (844.596µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.987476  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.432203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.987701  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 06:11:22.988641  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (722.24µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.991406  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.327364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.991737  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 06:11:22.993037  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.001698ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.995318  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.993728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.995500  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 06:11:22.996875  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.193793ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:22.998745  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:22.998984  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:22.999027  109502 httplog.go:90] GET /healthz: (1.433234ms) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:23.013465  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.413824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.013705  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 06:11:23.015474  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.015730  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.016110  109502 httplog.go:90] GET /healthz: (1.895383ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.032973  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.803412ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.053651  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.439699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.054043  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 06:11:23.072529  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.389471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.094030  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.735182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.094646  109502 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 06:11:23.096029  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.096060  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.097135  109502 httplog.go:90] GET /healthz: (2.748693ms) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:23.112733  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.112771  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.112829  109502 httplog.go:90] GET /healthz: (1.368681ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.112887  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.890548ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.133617  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.46645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.133907  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 06:11:23.152483  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.390822ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.174376  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.850935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.174633  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 06:11:23.195634  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.195667  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.195711  109502 httplog.go:90] GET /healthz: (3.349735ms) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:23.196419  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (5.129489ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.214534  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.214569  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.214622  109502 httplog.go:90] GET /healthz: (2.49714ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.215281  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.219575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.215554  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 06:11:23.232664  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.452451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.255504  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.187768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.255866  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 06:11:23.273210  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.958885ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.293982  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.638377ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.294241  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 06:11:23.297305  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.297331  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.297387  109502 httplog.go:90] GET /healthz: (1.227987ms) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:23.312728  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.721851ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.312927  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.312948  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.312979  109502 httplog.go:90] GET /healthz: (2.130299ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.333536  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.421862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.333831  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 06:11:23.352493  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.380868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.373682  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.515179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.374291  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 06:11:23.393489  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.067488ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.395018  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.395045  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.395079  109502 httplog.go:90] GET /healthz: (2.119472ms) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:23.414662  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.40455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.415463  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 06:11:23.415854  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.416149  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.416541  109502 httplog.go:90] GET /healthz: (4.589183ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.432411  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.329239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.453462  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.28963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.453824  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0814 06:11:23.472581  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.452919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.493212  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.903291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.493444  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.493466  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.493523  109502 httplog.go:90] GET /healthz: (1.172232ms) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:23.494009  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 06:11:23.512704  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.691433ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.512890  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.512910  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.512939  109502 httplog.go:90] GET /healthz: (2.059148ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.533927  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.747621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.534215  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 06:11:23.552679  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.543205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.574002  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.733182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.574640  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 06:11:23.592733  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.490248ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.594022  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.594049  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.594097  109502 httplog.go:90] GET /healthz: (1.346216ms) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:23.614248  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.803278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.614479  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 06:11:23.614717  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.614749  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.614810  109502 httplog.go:90] GET /healthz: (3.956994ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.632645  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.441964ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.653685  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.552968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.654065  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 06:11:23.672504  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.389095ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.694945  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.620741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.695109  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.695140  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.695165  109502 httplog.go:90] GET /healthz: (2.904777ms) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:23.695403  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 06:11:23.714516  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.714551  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.714602  109502 httplog.go:90] GET /healthz: (3.639931ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.714887  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (3.462906ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.735541  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.38726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.735815  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 06:11:23.752393  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.251505ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.773610  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.42289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.774168  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 06:11:23.793467  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.331313ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.796404  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.796434  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.796493  109502 httplog.go:90] GET /healthz: (3.5547ms) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:23.814565  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.814598  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.814642  109502 httplog.go:90] GET /healthz: (3.732058ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.815307  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.85563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.815536  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 06:11:23.832930  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.723847ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.855943  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.746651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.856229  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 06:11:23.872379  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.194192ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.895365  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.895402  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.895444  109502 httplog.go:90] GET /healthz: (3.057686ms) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:23.897126  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.788047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.897376  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 06:11:23.914981  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.915019  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.915063  109502 httplog.go:90] GET /healthz: (4.229421ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:23.915125  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (3.702838ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.934054  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.905127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.934341  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 06:11:23.952571  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.392858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.978159  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.169841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.978562  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 06:11:23.992787  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.63445ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:23.993190  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:23.993229  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:23.993254  109502 httplog.go:90] GET /healthz: (968.07µs) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:24.012252  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.012288  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.012326  109502 httplog.go:90] GET /healthz: (1.469758ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.013052  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.573562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.013374  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 06:11:24.032792  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.649961ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.053632  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.475093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.053934  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 06:11:24.072339  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.209957ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.099264  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.099292  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.099333  109502 httplog.go:90] GET /healthz: (3.639709ms) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:24.100051  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.940972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.100282  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 06:11:24.111933  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.111963  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.111998  109502 httplog.go:90] GET /healthz: (1.136374ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.113705  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (2.218539ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.134135  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.930412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.134593  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 06:11:24.152305  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.173439ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.173830  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.727546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.174103  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 06:11:24.193956  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.80759ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.197414  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.197448  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.197494  109502 httplog.go:90] GET /healthz: (4.015356ms) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:24.214957  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.892901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.215625  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.215646  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.215683  109502 httplog.go:90] GET /healthz: (4.834043ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.215678  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 06:11:24.232982  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.814214ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.253766  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.583411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.254028  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 06:11:24.272959  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.512138ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.294142  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.963019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.294388  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 06:11:24.296350  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.296374  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.296408  109502 httplog.go:90] GET /healthz: (4.063914ms) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:24.313100  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.313145  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.313197  109502 httplog.go:90] GET /healthz: (2.323284ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.313477  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (2.15224ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.333397  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.248452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.333667  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 06:11:24.353210  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (2.044054ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.375239  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.07863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.376294  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 06:11:24.392780  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.63422ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.393830  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.393856  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.393888  109502 httplog.go:90] GET /healthz: (1.493814ms) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:24.412501  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.412534  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.412572  109502 httplog.go:90] GET /healthz: (1.668928ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.413949  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.894645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.414190  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 06:11:24.432917  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.81929ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.454544  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.416711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.454809  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 06:11:24.472613  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.418762ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.494030  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.862982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.494569  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.494596  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.494625  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 06:11:24.494630  109502 httplog.go:90] GET /healthz: (1.224343ms) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:24.512571  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.566643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.513141  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.513172  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.513442  109502 httplog.go:90] GET /healthz: (2.566641ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.535151  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.986728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.535753  109502 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 06:11:24.552481  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.404554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.556315  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.081759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.573726  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.620195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.573993  109502 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 06:11:24.592757  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.623925ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.595889  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.595915  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.595953  109502 httplog.go:90] GET /healthz: (3.608124ms) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:24.596305  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.128052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.612832  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.612880  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.612921  109502 httplog.go:90] GET /healthz: (2.070003ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.613553  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.236675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.614404  109502 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 06:11:24.632545  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.415634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.634225  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.18836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.653281  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.103872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.654086  109502 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 06:11:24.672702  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.574866ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.675251  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.944843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.693980  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.867119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.694949  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.694979  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.695015  109502 httplog.go:90] GET /healthz: (1.631275ms) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:24.695246  109502 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 06:11:24.712969  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.713008  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.713064  109502 httplog.go:90] GET /healthz: (2.198323ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.713393  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (2.035926ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.715378  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.491981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.734590  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.405601ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.734907  109502 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 06:11:24.752636  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.478855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.754565  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.514354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.773856  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.685215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.778880  109502 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 06:11:24.793215  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.054432ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.794195  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.794223  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.794267  109502 httplog.go:90] GET /healthz: (1.527954ms) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:24.797808  109502 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.351947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.811838  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.811867  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.811915  109502 httplog.go:90] GET /healthz: (1.139771ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.814932  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.163813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.815284  109502 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 06:11:24.832730  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.582083ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.835339  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.380368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.853650  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.37751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.854140  109502 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 06:11:24.872477  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.339147ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.874734  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.289731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.893535  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.373615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.896019  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.896163  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.896419  109502 httplog.go:90] GET /healthz: (2.551808ms) 0 [Go-http-client/1.1 127.0.0.1:60116]
I0814 06:11:24.898074  109502 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 06:11:24.912219  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:24.912259  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:24.912296  109502 httplog.go:90] GET /healthz: (1.359638ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:24.913822  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.128713ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.915550  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.359294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.934116  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.970827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.934390  109502 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 06:11:24.952640  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.548867ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.954557  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.296479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.974047  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.434323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:24.974307  109502 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 06:11:25.001478  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:25.001508  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:25.001549  109502 httplog.go:90] GET /healthz: (1.099216ms) 0 [Go-http-client/1.1 127.0.0.1:60034]
I0814 06:11:25.002680  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (11.176599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:25.004915  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.545711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:25.011970  109502 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 06:11:25.012003  109502 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 06:11:25.012031  109502 httplog.go:90] GET /healthz: (1.282312ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:25.012970  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.709217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:25.013290  109502 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 06:11:25.032573  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.419739ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:25.034903  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.369602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:25.054901  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.599326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:25.055212  109502 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 06:11:25.072826  109502 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.628373ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:25.074946  109502 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.689426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:25.094836  109502 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.611713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:25.095062  109502 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 06:11:25.098147  109502 httplog.go:90] GET /healthz: (3.689468ms) 200 [Go-http-client/1.1 127.0.0.1:60116]
W0814 06:11:25.099111  109502 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:11:25.099138  109502 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:11:25.099161  109502 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:11:25.099172  109502 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:11:25.099186  109502 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:11:25.099196  109502 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:11:25.099214  109502 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:11:25.099227  109502 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:11:25.099236  109502 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:11:25.099318  109502 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 06:11:25.099330  109502 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 06:11:25.099354  109502 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0814 06:11:25.099364  109502 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0814 06:11:25.099955  109502 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.099982  109502 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.100353  109502 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.100365  109502 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.100636  109502 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.100648  109502 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.100992  109502 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.101010  109502 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.101132  109502 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.101148  109502 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.101516  109502 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.101530  109502 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.101562  109502 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.101578  109502 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.101939  109502 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.101955  109502 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.103484  109502 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (752.213µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60282]
I0814 06:11:25.103609  109502 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (433.307µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:11:25.104076  109502 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (486.034µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60274]
I0814 06:11:25.104577  109502 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (397.157µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60276]
I0814 06:11:25.105068  109502 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (385.124µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60278]
I0814 06:11:25.105386  109502 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (1.668598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:25.105592  109502 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (422.992µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60280]
I0814 06:11:25.106737  109502 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=28494 labels= fields= timeout=8m14s
I0814 06:11:25.106807  109502 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=28494 labels= fields= timeout=7m26s
I0814 06:11:25.107190  109502 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=28494 labels= fields= timeout=5m17s
I0814 06:11:25.107231  109502 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=28494 labels= fields= timeout=5m50s
I0814 06:11:25.107601  109502 get.go:250] Starting watch for /api/v1/services, rv=28494 labels= fields= timeout=6m56s
I0814 06:11:25.107799  109502 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (446.49µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:11:25.108422  109502 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=28494 labels= fields= timeout=7m2s
I0814 06:11:25.108431  109502 get.go:250] Starting watch for /api/v1/pods, rv=28494 labels= fields= timeout=8m49s
I0814 06:11:25.108853  109502 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.108868  109502 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.108855  109502 reflector.go:122] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.108897  109502 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.109414  109502 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (380.261µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60282]
I0814 06:11:25.109763  109502 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (462.616µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60288]
I0814 06:11:25.110325  109502 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=28494 labels= fields= timeout=5m52s
I0814 06:11:25.111524  109502 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=28494 labels= fields= timeout=9m32s
I0814 06:11:25.113303  109502 httplog.go:90] GET /healthz: (2.404697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60290]
I0814 06:11:25.114028  109502 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=28494 labels= fields= timeout=8m57s
I0814 06:11:25.114972  109502 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.114989  109502 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0814 06:11:25.115717  109502 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (452.488µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60296]
I0814 06:11:25.116360  109502 httplog.go:90] GET /api/v1/namespaces/default: (2.588792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:25.117212  109502 get.go:250] Starting watch for /api/v1/nodes, rv=28494 labels= fields= timeout=5m56s
I0814 06:11:25.118276  109502 httplog.go:90] POST /api/v1/namespaces: (1.589252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:25.119949  109502 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (990.729µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:25.125815  109502 httplog.go:90] POST /api/v1/namespaces/default/services: (5.480619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:25.127606  109502 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.26884ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:25.129944  109502 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.588042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:25.199845  109502 shared_informer.go:211] caches populated
I0814 06:11:25.300085  109502 shared_informer.go:211] caches populated
I0814 06:11:25.400423  109502 shared_informer.go:211] caches populated
I0814 06:11:25.500674  109502 shared_informer.go:211] caches populated
I0814 06:11:25.600859  109502 shared_informer.go:211] caches populated
I0814 06:11:25.701054  109502 shared_informer.go:211] caches populated
I0814 06:11:25.801268  109502 shared_informer.go:211] caches populated
I0814 06:11:25.901463  109502 shared_informer.go:211] caches populated
I0814 06:11:26.001677  109502 shared_informer.go:211] caches populated
I0814 06:11:26.101937  109502 shared_informer.go:211] caches populated
I0814 06:11:26.106058  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:26.106612  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:26.108067  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:26.108232  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:26.110018  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:26.110412  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:26.117005  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:26.202657  109502 shared_informer.go:211] caches populated
I0814 06:11:26.303038  109502 shared_informer.go:211] caches populated
I0814 06:11:26.306722  109502 httplog.go:90] POST /api/v1/nodes: (3.144424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:26.307396  109502 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0814 06:11:26.311160  109502 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods: (3.66637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:26.311508  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/waiting-pod
I0814 06:11:26.311521  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/waiting-pod
I0814 06:11:26.311661  109502 scheduler_binder.go:256] AssumePodVolumes for pod "preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/waiting-pod", node "test-node-0"
I0814 06:11:26.311675  109502 scheduler_binder.go:266] AssumePodVolumes for pod "preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/waiting-pod", node "test-node-0": all PVCs bound and nothing to do
I0814 06:11:26.311714  109502 framework.go:562] waiting for 30s for pod "waiting-pod" at permit
I0814 06:11:26.317572  109502 factory.go:615] Attempting to bind signalling-pod to test-node-1
I0814 06:11:26.318093  109502 factory.go:615] Attempting to bind waiting-pod to test-node-0
I0814 06:11:26.319157  109502 scheduler.go:447] Failed to bind pod: permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod
E0814 06:11:26.319176  109502 scheduler.go:449] scheduler cache ForgetPod failed: pod 2f2f94fb-0ff0-4e46-9582-6bd41eefd5f5 wasn't assumed so cannot be forgotten
E0814 06:11:26.319193  109502 scheduler.go:605] error binding pod: Post http://127.0.0.1:34515/api/v1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/pods/signalling-pod/binding: dial tcp 127.0.0.1:34515: connect: connection refused
E0814 06:11:26.319215  109502 factory.go:566] Error scheduling permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod: Post http://127.0.0.1:34515/api/v1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/pods/signalling-pod/binding: dial tcp 127.0.0.1:34515: connect: connection refused; retrying
I0814 06:11:26.319244  109502 factory.go:624] Updating pod condition for permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0814 06:11:26.319973  109502 factory.go:599] Error getting pod permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod for retry: Get http://127.0.0.1:34515/api/v1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/pods/signalling-pod: dial tcp 127.0.0.1:34515: connect: connection refused; retrying...
E0814 06:11:26.320035  109502 scheduler.go:280] Error updating the condition of the pod permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod: Put http://127.0.0.1:34515/api/v1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/pods/signalling-pod/status: dial tcp 127.0.0.1:34515: connect: connection refused
I0814 06:11:26.320686  109502 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/waiting-pod/binding: (2.350112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:26.320905  109502 scheduler.go:614] pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/waiting-pod is bound successfully on node "test-node-0", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
E0814 06:11:26.322091  109502 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:34515/apis/events.k8s.io/v1beta1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/events: dial tcp 127.0.0.1:34515: connect: connection refused' (may retry after sleeping)
I0814 06:11:26.327304  109502 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/events: (6.152798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
E0814 06:11:26.520642  109502 factory.go:599] Error getting pod permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod for retry: Get http://127.0.0.1:34515/api/v1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/pods/signalling-pod: dial tcp 127.0.0.1:34515: connect: connection refused; retrying...
E0814 06:11:26.921243  109502 factory.go:599] Error getting pod permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod for retry: Get http://127.0.0.1:34515/api/v1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/pods/signalling-pod: dial tcp 127.0.0.1:34515: connect: connection refused; retrying...
I0814 06:11:27.106204  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:27.106704  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:27.108213  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:27.108514  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:27.110164  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:27.110549  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:27.117165  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:11:27.721983  109502 factory.go:599] Error getting pod permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod for retry: Get http://127.0.0.1:34515/api/v1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/pods/signalling-pod: dial tcp 127.0.0.1:34515: connect: connection refused; retrying...
I0814 06:11:28.106515  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:28.106763  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:28.108325  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:28.108649  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:28.110317  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:28.110698  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:28.117359  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:29.106802  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:29.106931  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:29.108508  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:29.108833  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:29.110413  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:29.113557  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:29.117460  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:11:29.322592  109502 factory.go:599] Error getting pod permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod for retry: Get http://127.0.0.1:34515/api/v1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/pods/signalling-pod: dial tcp 127.0.0.1:34515: connect: connection refused; retrying...
I0814 06:11:30.107018  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:30.107143  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:30.108683  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:30.108980  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:30.110559  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:30.113703  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:30.117658  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:31.107267  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:31.107346  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:31.108837  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:31.109132  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:31.110700  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:31.113930  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:31.117840  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:32.107450  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:32.107459  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:32.109020  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:32.109258  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:32.110827  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:32.114084  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:32.118035  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:11:32.523225  109502 factory.go:599] Error getting pod permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod for retry: Get http://127.0.0.1:34515/api/v1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/pods/signalling-pod: dial tcp 127.0.0.1:34515: connect: connection refused; retrying...
E0814 06:11:32.944148  109502 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:46721/apis/events.k8s.io/v1beta1/namespaces/permit-plugince17b60c-e64e-4bae-a2d2-c51cc4c3cd4b/events: dial tcp 127.0.0.1:46721: connect: connection refused' (may retry after sleeping)
I0814 06:11:33.107633  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:33.107701  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:33.109304  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:33.109663  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:33.111016  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:33.114225  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:33.118211  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:34.107877  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:34.107971  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:34.109573  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:34.109847  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:34.111144  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:34.114366  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:34.118363  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:11:34.619173  109502 factory.go:599] Error getting pod permit-plugince17b60c-e64e-4bae-a2d2-c51cc4c3cd4b/test-pod for retry: Get http://127.0.0.1:46721/api/v1/namespaces/permit-plugince17b60c-e64e-4bae-a2d2-c51cc4c3cd4b/pods/test-pod: dial tcp 127.0.0.1:46721: connect: connection refused; retrying...
I0814 06:11:35.108061  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:35.108080  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:35.109717  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:35.109968  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:35.111292  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:35.114611  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:35.116060  109502 httplog.go:90] GET /api/v1/namespaces/default: (2.151478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:35.117686  109502 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.212914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:35.118648  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:35.119405  109502 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.34663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:36.108239  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:36.108227  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:36.109838  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:36.110195  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:36.111459  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:36.114754  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:36.118827  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:11:36.765102  109502 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:34515/apis/events.k8s.io/v1beta1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/events: dial tcp 127.0.0.1:34515: connect: connection refused' (may retry after sleeping)
I0814 06:11:37.108394  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:37.108496  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:37.109997  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:37.110344  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:37.111575  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:37.114914  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:37.118993  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:38.108601  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:38.108701  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:38.110687  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:38.110948  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:38.111738  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:38.115073  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:38.119200  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:11:38.923867  109502 factory.go:599] Error getting pod permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod for retry: Get http://127.0.0.1:34515/api/v1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/pods/signalling-pod: dial tcp 127.0.0.1:34515: connect: connection refused; retrying...
I0814 06:11:39.108844  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:39.108906  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:39.110826  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:39.111049  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:39.111822  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:39.115209  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:39.119893  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:40.109046  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:40.109114  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:40.111010  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:40.111241  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:40.111970  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:40.115322  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:40.120055  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:41.109422  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:41.109523  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:41.111125  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:41.111405  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:41.112114  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:41.115996  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:41.120214  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:42.109584  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:42.109587  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:42.111273  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:42.111504  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:42.112250  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:42.116528  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:42.120350  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:11:43.035649  109502 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:46721/apis/events.k8s.io/v1beta1/namespaces/permit-plugince17b60c-e64e-4bae-a2d2-c51cc4c3cd4b/events: dial tcp 127.0.0.1:46721: connect: connection refused' (may retry after sleeping)
I0814 06:11:43.109793  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:43.109844  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:43.111464  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:43.111608  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:43.112446  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:43.116974  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:43.120510  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:44.109967  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:44.110069  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:44.111632  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:44.111848  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:44.112578  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:44.117080  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:44.120751  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:45.110110  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:45.110218  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:45.111762  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:45.112004  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:45.112761  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:45.115996  109502 httplog.go:90] GET /api/v1/namespaces/default: (2.040073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:45.117242  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:45.118049  109502 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.628849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:45.119669  109502 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.169041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:45.120935  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:46.110308  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:46.110408  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:46.112291  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:46.112488  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:46.112907  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:46.117391  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:46.121075  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:47.110431  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:47.110569  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:47.112369  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:47.112631  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:47.113311  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:47.117536  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:47.121401  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:11:47.323308  109502 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:34515/apis/events.k8s.io/v1beta1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/events: dial tcp 127.0.0.1:34515: connect: connection refused' (may retry after sleeping)
I0814 06:11:48.110542  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:48.110691  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:48.112532  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:48.112788  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:48.113694  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:48.117670  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:48.122224  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:49.110788  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:49.110897  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:49.112736  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:49.112936  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:49.114473  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:49.117812  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:49.122842  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:50.110979  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:50.111015  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:50.114404  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:50.114467  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:50.115227  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:50.118586  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:50.123077  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:51.111170  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:51.111173  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:51.114531  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:51.114698  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:51.115455  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:51.118741  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:51.123258  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:11:51.724499  109502 factory.go:599] Error getting pod permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod for retry: Get http://127.0.0.1:34515/api/v1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/pods/signalling-pod: dial tcp 127.0.0.1:34515: connect: connection refused; retrying...
I0814 06:11:52.111356  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:52.111462  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:52.114721  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:52.114975  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:52.115560  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:52.118918  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:52.123448  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:53.111561  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:53.111663  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:53.115307  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:53.115343  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:53.115690  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:53.119007  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:53.123633  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 06:11:53.981195  109502 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:46721/apis/events.k8s.io/v1beta1/namespaces/permit-plugince17b60c-e64e-4bae-a2d2-c51cc4c3cd4b/events: dial tcp 127.0.0.1:46721: connect: connection refused' (may retry after sleeping)
I0814 06:11:54.111730  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:54.111791  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:54.115455  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:54.115547  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:54.115848  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:54.119159  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:54.123808  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:55.111935  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:55.112042  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:55.115836  109502 httplog.go:90] GET /api/v1/namespaces/default: (1.719356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:55.116624  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:55.116939  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:55.116978  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:55.118074  109502 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.856996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:55.119927  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:55.120915  109502 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.530563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:55.123985  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:56.112076  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:56.112252  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:56.116995  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:56.117082  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:56.117101  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:56.120024  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:56.124718  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:56.316314  109502 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods: (3.280832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:56.317395  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:11:56.317420  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:11:56.317546  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:11:56.317610  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:11:56.320785  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.869333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:56.322718  109502 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/events: (4.257141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:56.325632  109502 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod/status: (6.019584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35134]
I0814 06:11:56.328036  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.582204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:56.328351  109502 generic_scheduler.go:1191] Node test-node-0 is a potential node for preemption.
I0814 06:11:56.330848  109502 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod/status: (1.963453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:56.334396  109502 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/waiting-pod: (3.197963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:56.337912  109502 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/events: (2.705272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:56.419116  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.882677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:56.521125  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.94369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:56.619016  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.947336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:56.718737  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.648429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:56.818696  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.653638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:56.918856  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.753843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:57.019832  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.726892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:57.112232  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:57.112377  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:57.117555  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:57.117651  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:57.117693  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:11:57.117703  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:11:57.117704  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:57.117873  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:11:57.117931  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:11:57.120161  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:57.124384  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (5.108034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:11:57.124968  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (7.967251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35132]
I0814 06:11:57.125118  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (6.288959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:57.125326  109502 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/events: (6.528251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35204]
I0814 06:11:57.125723  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:57.219095  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.752548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:57.320076  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.990519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:57.419266  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.508492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:57.518767  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.678058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:57.618673  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.598434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:57.718804  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.692884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:57.818995  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.855605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
E0814 06:11:57.907960  109502 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:34515/apis/events.k8s.io/v1beta1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/events: dial tcp 127.0.0.1:34515: connect: connection refused' (may retry after sleeping)
I0814 06:11:57.918592  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.53646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:58.018821  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.710277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:58.109740  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:11:58.109787  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:11:58.109945  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:11:58.109992  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:11:58.112394  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:58.112512  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:58.113965  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.283147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:58.113969  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.348122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:11:58.115027  109502 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/events/preemptor-pod.15bab5425e38d18d: (3.310113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35326]
I0814 06:11:58.117747  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:58.117748  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:58.117932  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:11:58.117955  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:11:58.118071  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:11:58.118128  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:11:58.118903  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.904725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:11:58.119033  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:58.120363  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:58.120454  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.624427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35334]
I0814 06:11:58.120470  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.000487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:58.125822  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:58.219341  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.142885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:58.319349  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.120768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:58.418860  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.784092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:58.519014  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.872169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:58.618885  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.792476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:58.719376  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.322879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:58.818967  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.863261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:58.918994  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.920775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:59.019140  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.989949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:59.112594  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:59.112731  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:59.118408  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:59.118480  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:59.118627  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:11:59.118640  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:11:59.118793  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:11:59.118840  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:11:59.119346  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:59.121209  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (4.10893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:11:59.121642  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:59.123535  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.656964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35370]
I0814 06:11:59.123927  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.447239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:11:59.126118  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:11:59.219212  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.702299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:11:59.319062  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.798118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:11:59.418821  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.700646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:11:59.521537  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (4.420081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:11:59.620249  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.700491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:11:59.718869  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.838408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:11:59.819182  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.081934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:11:59.918797  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.729645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:12:00.019295  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.263264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:12:00.112757  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:00.113021  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:00.118570  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:00.118812  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:00.118920  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:00.118931  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:00.119075  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:00.119120  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:00.119461  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:00.121804  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.655336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:00.122098  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.528116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:00.122911  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:00.123431  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (6.330604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35202]
I0814 06:12:00.126272  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:00.218809  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.658667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
E0814 06:12:00.219648  109502 factory.go:599] Error getting pod permit-plugince17b60c-e64e-4bae-a2d2-c51cc4c3cd4b/test-pod for retry: Get http://127.0.0.1:46721/api/v1/namespaces/permit-plugince17b60c-e64e-4bae-a2d2-c51cc4c3cd4b/pods/test-pod: dial tcp 127.0.0.1:46721: connect: connection refused; retrying...
I0814 06:12:00.320553  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.376089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:00.420130  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.037719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:00.518984  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.910816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:00.619385  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.244488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:00.722541  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (5.374988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:00.818816  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.712443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:00.919432  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.815144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:01.019034  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.008427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:01.112956  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:01.113129  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:01.118875  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.809182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:01.119531  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:01.119558  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:01.119581  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:01.119671  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:01.119683  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:01.119833  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:01.119876  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:01.121526  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.434684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:01.122686  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.58415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:01.123008  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:01.126399  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:01.218650  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.449962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:01.318894  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.669233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:01.418839  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.791436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:01.518883  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.796624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:01.618959  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.830522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:01.718683  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.600706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:01.820489  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.341409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:01.926399  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (8.834525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:02.018750  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.661358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:02.113124  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:02.113355  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:02.118647  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.591702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:02.119727  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:02.119754  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:02.119918  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:02.120032  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:02.120044  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:02.120186  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:02.120227  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:02.123550  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.06595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:02.124219  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.225172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:02.126558  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:02.126866  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:02.219120  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.015679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:02.319801  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.505098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:02.419969  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.449942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:02.518986  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.665674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:02.619566  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.480688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:02.718563  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.437079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:02.818674  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.540897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:02.918755  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.700071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:03.018886  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.657338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:03.115366  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:03.115478  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:03.118716  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.641268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:03.119869  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:03.119905  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:03.120060  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:03.120207  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:03.120221  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:03.120372  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:03.120419  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:03.122651  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.487928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:03.123495  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.86794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:03.126753  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:03.127023  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:03.218932  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.701473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:03.318903  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.809505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:03.418871  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.765257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:03.519129  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.025457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:03.619001  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.894252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:03.718932  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.83594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:03.819140  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.917582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:03.920274  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.845155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:04.018894  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.784748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:04.115558  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:04.115566  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:04.118643  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.59751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:04.120033  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:04.120035  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:04.120303  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:04.120419  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:04.120434  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:04.120564  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:04.120615  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:04.122401  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.486379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:04.122461  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.420901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:04.126928  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:04.127155  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:04.218995  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.835969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:04.319023  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.762225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:04.419092  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.989783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:04.519199  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.121926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
E0814 06:12:04.605551  109502 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:46721/apis/events.k8s.io/v1beta1/namespaces/permit-plugince17b60c-e64e-4bae-a2d2-c51cc4c3cd4b/events: dial tcp 127.0.0.1:46721: connect: connection refused' (may retry after sleeping)
I0814 06:12:04.619131  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.920209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:04.718858  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.758451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:04.819032  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.969524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:04.919285  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.101958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.018858  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.702733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.115728  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:05.115742  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:05.115880  109502 httplog.go:90] GET /api/v1/namespaces/default: (1.679748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.117640  109502 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.375866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.118837  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.791899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:05.119647  109502 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.239796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.120186  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:05.120192  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:05.120502  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:05.120671  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:05.120687  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:05.120816  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:05.120854  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:05.122275  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.237253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.122414  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.251237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:05.127135  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:05.127292  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:05.219139  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.063657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.318847  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.775546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.420322  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.186357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.520653  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.548106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.618931  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.653077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.718720  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.663763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.819128  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.932676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:05.919080  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.038024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:06.018877  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.782768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:06.115919  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:06.115969  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:06.118953  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.866277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:06.120313  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:06.120348  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:06.120657  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:06.120819  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:06.120834  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:06.120955  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:06.121000  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:06.122895  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.516261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:06.122998  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.793589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:06.127308  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:06.127417  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:06.218791  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.73627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:06.318928  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.801197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:06.418940  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.87502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:06.520528  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.444525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:06.618850  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.749994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:06.719329  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.241956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:06.818654  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.568505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:06.919053  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.914202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:07.018682  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.568974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:07.116105  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:07.116166  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:07.118909  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.863624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:07.120452  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:07.120457  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:07.120829  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:07.120958  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:07.120979  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:07.121147  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:07.121201  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:07.123335  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.294329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:07.123683  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.961797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:07.127462  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:07.127572  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:07.219322  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.875264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:07.323007  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (5.875325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:07.418562  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.519783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:07.518544  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.450234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:07.621197  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (4.132117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:07.718720  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.641519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:07.818610  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.53997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:07.918620  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.511797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:08.018818  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.782036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:08.118764  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.711338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:08.120625  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:08.120659  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:08.120901  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:08.120921  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:08.120979  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:08.121141  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:08.121155  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:08.121317  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:08.121363  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:08.124247  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.641479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:08.125827  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (4.092732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:08.127642  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:08.127763  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:08.218957  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.81121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:08.323100  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.884744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:08.418444  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.362904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:08.519206  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.098264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:08.618713  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.638278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:08.718933  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.825676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:08.818742  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.702029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:08.919033  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.935753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:09.019042  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.92874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:09.118911  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.854974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:09.120791  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:09.120849  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:09.120988  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:09.120993  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:09.121149  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:09.121281  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:09.121296  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:09.121420  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:09.121474  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:09.124162  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.329646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:09.124509  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.224156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:09.127823  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:09.127941  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:09.218656  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.651921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:09.321224  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (4.085498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:09.421015  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.880662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:09.519411  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.237111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:09.618970  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.829042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:09.718739  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.661867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
E0814 06:12:09.816205  109502 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:34515/apis/events.k8s.io/v1beta1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/events: dial tcp 127.0.0.1:34515: connect: connection refused' (may retry after sleeping)
I0814 06:12:09.818635  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.653515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:09.919546  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.482621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:10.019259  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.160658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:10.118628  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.501667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:10.120969  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:10.121010  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:10.121071  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:10.121162  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:10.121299  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:10.121414  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:10.121434  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:10.121584  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:10.121632  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:10.125324  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.028212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:10.125707  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.478415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:10.128010  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:10.128052  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:10.219918  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.759675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:10.319027  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.868889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:10.419248  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.065668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:10.521281  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (4.115141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:10.619014  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.904881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:10.719119  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.016887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:10.818916  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.777632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:10.919141  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.835936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:11.019326  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.976336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:11.119084  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.971509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:11.121158  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:11.121204  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:11.121247  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:11.121338  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:11.121717  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:11.121878  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:11.121901  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:11.122062  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:11.122123  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:11.124361  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.933899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:11.125173  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.680996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:11.128157  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:11.128179  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:11.219264  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.154954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:11.318988  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.920133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:11.419035  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.95933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:11.519361  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.186036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:11.618991  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.83161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:11.718885  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.774404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:11.819071  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.746455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:11.918977  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.928607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:12.018850  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.772535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:12.119044  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.87038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:12.121331  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:12.121451  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:12.121468  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:12.121656  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:12.121897  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:12.122301  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:12.122501  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:12.122794  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:12.122923  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:12.124952  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.666405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:12.128261  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:12.128510  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:12.129570  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.302484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:12.218897  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.740822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:12.319255  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.071134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:12.418965  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.870918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:12.518895  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.780835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:12.618820  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.768197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:12.718970  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.883885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:12.818943  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.796296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:12.918976  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.882004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:13.018938  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.867252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:13.119203  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.112835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:13.121485  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:13.121573  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:13.121588  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:13.121820  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:13.122154  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:13.122265  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:13.122278  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:13.122422  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:13.122460  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:13.125267  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.688436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:13.125569  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.452772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:13.128373  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:13.128741  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:13.219038  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.921326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:13.319130  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.986824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:13.421204  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.583949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:13.518941  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.843537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:13.618556  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.483952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:13.718884  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.733475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:13.819177  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.110538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:13.918727  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.629183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:14.019068  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.91978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:14.118833  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.631434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:14.121683  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:14.121837  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:14.121859  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:14.122018  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:14.122248  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:14.122425  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:14.122516  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:14.122700  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:14.122864  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:14.124911  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.693703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:14.125556  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.672566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:14.128529  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:14.128866  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:14.219041  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.894063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:14.319093  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.920004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:14.418555  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.515652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:14.518714  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.611147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:14.618902  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.777896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:14.718730  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.642998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:14.819131  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.992118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:14.918991  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.840041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.018939  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.875185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.116427  109502 httplog.go:90] GET /api/v1/namespaces/default: (2.055745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.118183  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.27186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:15.119021  109502 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.581759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.120372  109502 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (945.678µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.122047  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:15.122162  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:15.122241  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:15.122272  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:15.122440  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:15.122551  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:15.122564  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:15.122703  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:15.122745  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:15.124278  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.14499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:15.124680  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.189254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.128753  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:15.129011  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:15.218938  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.8906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.318952  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.753574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.418853  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.694188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.519885  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.740794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.619085  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.006771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.719019  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.932537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.818768  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.68784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:15.919198  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.149553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:16.018874  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.782623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:16.119223  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.121589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:16.122279  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:16.122328  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:16.122426  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:16.122449  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:16.122922  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:16.123053  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:16.123066  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:16.123204  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:16.123244  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:16.127328  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.37358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:16.127887  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.508744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:16.128900  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:16.129128  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:16.219019  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.924409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:16.319018  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.874497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:16.419020  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.870899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:16.519096  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.751635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:16.618862  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.820621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:16.719116  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.974411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:16.819107  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.992115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:16.918881  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.72983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:17.018790  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.609454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
E0814 06:12:17.049035  109502 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:46721/apis/events.k8s.io/v1beta1/namespaces/permit-plugince17b60c-e64e-4bae-a2d2-c51cc4c3cd4b/events: dial tcp 127.0.0.1:46721: connect: connection refused' (may retry after sleeping)
I0814 06:12:17.118959  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.878303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:17.122585  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:17.122761  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:17.122861  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:17.122963  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:17.123048  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:17.123250  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:17.123323  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:17.123528  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:17.123666  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
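
The repeated "no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory" entries mean the preemptor pod requests more CPU and memory than the single test node has allocatable, so it stays Pending (PodScheduled=False, Reason=Unschedulable) until the waiting pod is preempted. Below is a minimal sketch of a pod spec that would produce this message; the request values, image, and function name are illustrative assumptions, not the test's actual fixture.

// A minimal sketch, not the test's actual fixture: a pod whose resource requests exceed
// the lone test node's allocatable CPU and memory, which is what makes the scheduler log
// "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory".
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func oversizedPreemptorPod(namespace string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod", Namespace: namespace},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						// Illustrative values, larger than what the node has free,
						// so the pod is unschedulable until a victim is preempted.
						v1.ResourceCPU:    resource.MustParse("4"),
						v1.ResourceMemory: resource.MustParse("8Gi"),
					},
				},
			}},
		},
	}
}
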
I0814 06:12:17.125648  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.577204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:17.126667  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.319835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:17.129270  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:17.129444  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:17.219039  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.896202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:17.318957  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.82954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
E0814 06:12:17.325065  109502 factory.go:599] Error getting pod permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/signalling-pod for retry: Get http://127.0.0.1:34515/api/v1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/pods/signalling-pod: dial tcp 127.0.0.1:34515: connect: connection refused; retrying...
I0814 06:12:17.419629  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.327388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:17.518840  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.698049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:17.619525  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.372977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:17.719104  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.066755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:17.818912  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.818932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:17.919097  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.992857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:18.018481  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.439279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:18.118579  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.457509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:18.122728  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:18.123005  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:18.123106  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:18.123123  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:18.123235  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:18.123417  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:18.123532  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:18.123794  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:18.123910  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:18.125740  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.599757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:18.126556  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.407252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:18.129458  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:18.129626  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:18.218949  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.825732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:18.319091  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.040067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:18.419089  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.918873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:18.518962  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.877803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:18.619248  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.156519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:18.719365  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.166189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:18.818894  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.805594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:18.919132  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.029615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:19.018824  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.704736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:19.118723  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.572273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:19.122906  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:19.123119  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:19.123223  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:19.123267  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:19.123416  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:19.129638  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:19.129826  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:19.219052  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.882718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:19.319037  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.952995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:19.418963  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.881513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:19.519154  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.088017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:19.622106  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (5.055535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:19.718753  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.716349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:19.818583  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.507249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:19.918793  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.698233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
E0814 06:12:20.012277  109502 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:34515/apis/events.k8s.io/v1beta1/namespaces/permit-pluginaaf3e309-fef8-4d2c-b21f-573c6f7af199/events: dial tcp 127.0.0.1:34515: connect: connection refused' (may retry after sleeping)
I0814 06:12:20.019130  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.058066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:20.118658  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.602205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:20.123274  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:20.123437  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:20.123459  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:20.123510  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:20.123566  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:20.123711  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:20.123725  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:20.123907  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:20.123954  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:20.125644  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.436372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:20.125887  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.670384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:20.130056  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:20.130231  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:20.218711  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.696089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:20.318992  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.877378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:20.418744  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.688837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:20.518660  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.604069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:20.618760  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.683123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:20.718617  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.592549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:20.818717  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.612643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:20.918605  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.527812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:21.018841  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.763136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:21.118828  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.753074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:21.123459  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:21.123593  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:21.123621  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:21.123621  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:21.123739  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:21.123908  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:21.123931  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:21.124059  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:21.124115  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:21.126377  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.87489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:21.126380  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.006876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:21.130197  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:21.130402  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:21.219305  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.134478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:21.318983  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.895866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:21.419015  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.920424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:21.518875  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.765993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:21.619039  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.670692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:21.718970  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.759178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:21.818948  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.845412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:21.848397  109502 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.352624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:21.850551  109502 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.653943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:21.858728  109502 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.330881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:21.928234  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (11.192518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:22.018831  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.7212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:22.119586  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.309097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:22.123703  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:22.123703  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:22.123758  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:22.123881  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:22.123960  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:22.123996  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:22.124006  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:22.124166  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:22.124214  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:22.126491  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.021129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:22.127707  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.147863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:22.130332  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:22.130673  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:22.219026  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.867494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:22.318922  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.835593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:22.419035  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.913914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:22.518847  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.687473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:22.619022  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.896829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:22.718744  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.663797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:22.818686  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.613854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:22.919014  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.919968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:23.018700  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.609195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:23.123840  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:23.123899  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:23.123914  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:23.124000  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:23.124000  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:23.124017  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:23.124193  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:23.124251  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:23.124874  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (6.348395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:23.125195  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:23.130489  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:23.130978  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:23.134468  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (9.934667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60294]
I0814 06:12:23.134895  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (10.011567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
E0814 06:12:23.135190  109502 factory.go:590] pod is already present in the activeQ
I0814 06:12:23.135400  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:23.135417  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:23.135566  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:23.135610  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:23.138093  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.188711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:23.138179  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.346849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:23.219179  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.015086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:23.318940  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.855757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:23.420366  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.245285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:23.520451  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.380297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:23.618747  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.635958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:23.718946  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:23.819763  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.693884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:23.918987  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.840095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:24.018922  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.921639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:24.118585  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.550992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:24.123977  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:24.124080  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:24.124096  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:24.124214  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:24.124351  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:24.124367  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:24.124497  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:24.124534  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:24.125937  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:24.127108  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.086152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:24.127117  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.579147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:24.130647  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:24.131137  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:24.218829  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.775084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:24.318856  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.777825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:24.420286  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.200977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:24.519086  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.012271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:24.619921  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.788663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:24.718961  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.904133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:24.830330  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (5.618943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:24.919550  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.115947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.019458  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.305518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.116219  109502 httplog.go:90] GET /api/v1/namespaces/default: (1.335236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.118935  109502 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.157208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.119570  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.130247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:25.121333  109502 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.889609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.124169  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:25.124205  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:25.124370  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:25.124416  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:25.124533  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:25.124490  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:25.124823  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:25.124935  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:25.126098  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:25.128045  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.706311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
E0814 06:12:25.128383  109502 factory.go:590] pod is already present in the activeQ
I0814 06:12:25.128712  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.503539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.129086  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:25.129169  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:25.129331  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:25.129420  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:25.130932  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.096842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:25.131022  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:25.131280  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:25.132058  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.134368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.226704  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (9.504682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.318886  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.689358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.418972  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.921632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.520571  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (3.522829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.619895  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.348088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.719459  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.378505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.818970  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.905029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:25.918742  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.649506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:26.018647  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.542327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:26.120066  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.745206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:26.124348  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:26.124389  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:26.124617  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:26.124724  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:26.124738  109502 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:26.124841  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:26.124914  109502 factory.go:550] Unable to schedule preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 06:12:26.124966  109502 factory.go:624] Updating pod condition for preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 06:12:26.126324  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:26.127217  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.626419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:26.127298  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.830236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:26.131166  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:26.131862  109502 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 06:12:26.218807  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.645061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:26.318896  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.845554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:26.321739  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (2.404108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:26.323589  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/waiting-pod: (1.4053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:26.332819  109502 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/waiting-pod: (8.747265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:26.337833  109502 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:26.337882  109502 scheduler.go:473] Skip schedule deleting pod: preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/preemptor-pod
I0814 06:12:26.339875  109502 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/events: (1.668128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0814 06:12:26.341041  109502 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (7.719823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:26.347520  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/waiting-pod: (4.686944ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:26.350127  109502 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-pluginf6f05c63-70eb-4762-bb5d-a7c7a2cc01cc/pods/preemptor-pod: (1.098994ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
E0814 06:12:26.351089  109502 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0814 06:12:26.351537  109502 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=28494&timeout=5m17s&timeoutSeconds=317&watch=true: (1m1.24461414s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60034]
I0814 06:12:26.351726  109502 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=28494&timeout=7m26s&timeoutSeconds=446&watch=true: (1m1.245211999s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60274]
I0814 06:12:26.351866  109502 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=28494&timeout=8m14s&timeoutSeconds=494&watch=true: (1m1.245460476s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60286]
I0814 06:12:26.351993  109502 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=28494&timeout=7m2s&timeoutSeconds=422&watch=true: (1m1.243828935s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60116]
I0814 06:12:26.352129  109502 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=28494&timeout=5m52s&timeoutSeconds=352&watch=true: (1m1.242411184s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60282]
I0814 06:12:26.352237  109502 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=28494&timeout=9m32s&timeoutSeconds=572&watch=true: (1m1.240995049s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60288]
I0814 06:12:26.352342  109502 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=28494&timeout=8m57s&timeoutSeconds=537&watch=true: (1m1.238644231s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60290]
I0814 06:12:26.352455  109502 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=28494&timeout=6m56s&timeoutSeconds=416&watch=true: (1m1.245085469s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60278]
I0814 06:12:26.352568  109502 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=28494&timeout=5m56s&timeoutSeconds=356&watch=true: (1m1.235706001s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60296]
I0814 06:12:26.352668  109502 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=28494&timeout=8m49s&timeoutSeconds=529&watch=true: (1m1.24452414s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60280]
I0814 06:12:26.353447  109502 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=28494&timeout=5m50s&timeoutSeconds=350&watch=true: (1m1.246506514s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60276]
I0814 06:12:26.366369  109502 httplog.go:90] DELETE /api/v1/nodes: (13.583264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:26.366583  109502 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 06:12:26.369659  109502 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.83759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
I0814 06:12:26.375164  109502 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (5.051939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38028]
--- FAIL: TestPreemptWithPermitPlugin (64.73s)
    framework_test.go:1618: Expected the preemptor pod to be scheduled. error: timed out waiting for the condition
    framework_test.go:1622: Expected the waiting pod to get preempted and deleted

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-060355.xml
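
The roughly 100ms-spaced GETs for preemptor-pod throughout the log are the test polling for the pod to be scheduled; because the PodScheduled condition never turns true before the timeout, the framework fails with "timed out waiting for the condition" as shown above. A minimal sketch of that polling pattern, assuming the common client-go wait.Poll idiom of this era (interval, timeout, and helper name are assumptions, not the framework's exact code):

// A rough sketch of the polling loop suggested by the repeated GETs in the log.
package sketch

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodScheduled(cs kubernetes.Interface, ns, name string) error {
	return wait.Poll(100*time.Millisecond, 60*time.Second, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodScheduled && cond.Status == v1.ConditionTrue {
				return true, nil
			}
		}
		// Not scheduled yet; wait.Poll retries until it gives up with
		// "timed out waiting for the condition".
		return false, nil
	})
}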

Error lines from build-log.txt

... skipping 799 lines ...
W0814 05:58:12.228] W0814 05:58:12.211332   53051 controllermanager.go:527] Skipping "route"
W0814 05:58:12.228] I0814 05:58:12.212263   53051 controllermanager.go:535] Started "deployment"
W0814 05:58:12.228] I0814 05:58:12.212509   53051 deployment_controller.go:152] Starting deployment controller
W0814 05:58:12.228] I0814 05:58:12.212745   53051 controller_utils.go:1029] Waiting for caches to sync for deployment controller
W0814 05:58:12.228] I0814 05:58:12.212825   53051 controllermanager.go:535] Started "csrcleaner"
W0814 05:58:12.229] I0814 05:58:12.213575   53051 node_lifecycle_controller.go:77] Sending events to api server
W0814 05:58:12.229] E0814 05:58:12.213855   53051 core.go:175] failed to start cloud node lifecycle controller: no cloud provider provided
W0814 05:58:12.229] I0814 05:58:12.212835   53051 cleaner.go:81] Starting CSR cleaner controller
W0814 05:58:12.229] W0814 05:58:12.214033   53051 controllermanager.go:527] Skipping "cloud-node-lifecycle"
W0814 05:58:12.229] I0814 05:58:12.215243   53051 controllermanager.go:535] Started "persistentvolume-binder"
W0814 05:58:12.230] I0814 05:58:12.215978   53051 controllermanager.go:535] Started "endpoint"
W0814 05:58:12.230] I0814 05:58:12.216151   53051 endpoints_controller.go:170] Starting endpoint controller
W0814 05:58:12.230] I0814 05:58:12.216257   53051 controller_utils.go:1029] Waiting for caches to sync for endpoint controller
W0814 05:58:12.230] I0814 05:58:12.215406   53051 pv_controller_base.go:282] Starting persistent volume controller
W0814 05:58:12.230] I0814 05:58:12.216291   53051 controller_utils.go:1029] Waiting for caches to sync for persistent volume controller
W0814 05:58:12.230] I0814 05:58:12.216943   53051 controllermanager.go:535] Started "cronjob"
W0814 05:58:12.231] I0814 05:58:12.217065   53051 cronjob_controller.go:96] Starting CronJob Manager
W0814 05:58:12.231] W0814 05:58:12.218724   53051 controllermanager.go:527] Skipping "csrsigning"
W0814 05:58:12.231] W0814 05:58:12.218928   53051 controllermanager.go:514] "bootstrapsigner" is disabled
W0814 05:58:12.231] E0814 05:58:12.219761   53051 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0814 05:58:12.231] W0814 05:58:12.220004   53051 controllermanager.go:527] Skipping "service"
W0814 05:58:12.247] W0814 05:58:12.247056   53051 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0814 05:58:12.332] I0814 05:58:12.332227   53051 controller_utils.go:1036] Caches are synced for namespace controller
W0814 05:58:12.336] I0814 05:58:12.336435   53051 controller_utils.go:1036] Caches are synced for TTL controller
W0814 05:58:12.346] I0814 05:58:12.345599   53051 controller_utils.go:1036] Caches are synced for certificate controller
W0814 05:58:12.346] I0814 05:58:12.345612   53051 controller_utils.go:1036] Caches are synced for PV protection controller
W0814 05:58:12.349] I0814 05:58:12.349263   53051 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0814 05:58:12.546] I0814 05:58:12.545591   53051 controller_utils.go:1036] Caches are synced for expand controller
... skipping 107 lines ...
I0814 05:58:16.925] +++ command: run_RESTMapper_evaluation_tests
I0814 05:58:16.937] +++ [0814 05:58:16] Creating namespace namespace-1565762296-13339
I0814 05:58:17.032] namespace/namespace-1565762296-13339 created
W0814 05:58:17.133] /go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 143: 53701 Terminated              kubectl proxy --port=0 --www=. --api-prefix="$1" > "${PROXY_PORT_FILE}" 2>&1
I0814 05:58:17.234] Context "test" modified.
I0814 05:58:17.234] +++ [0814 05:58:17] Testing RESTMapper
I0814 05:58:17.294] +++ [0814 05:58:17] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0814 05:58:17.313] +++ exit code: 0
I0814 05:58:17.455] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0814 05:58:17.456] bindings                                                                      true         Binding
I0814 05:58:17.456] componentstatuses                 cs                                          false        ComponentStatus
I0814 05:58:17.456] configmaps                        cm                                          true         ConfigMap
I0814 05:58:17.456] endpoints                         ep                                          true         Endpoints
... skipping 643 lines ...
I0814 05:58:40.660] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 05:58:40.870] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 05:58:40.982] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 05:58:41.193] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 05:58:41.310] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 05:58:41.415] pod "valid-pod" force deleted
W0814 05:58:41.516] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0814 05:58:41.534] error: setting 'all' parameter but found a non empty selector. 
W0814 05:58:41.535] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 05:58:41.635] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 05:58:41.667] core.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0814 05:58:41.762] (Bnamespace/test-kubectl-describe-pod created
I0814 05:58:41.880] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0814 05:58:41.995] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0814 05:58:43.188] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0814 05:58:43.311] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0814 05:58:43.413] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0814 05:58:43.526] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0814 05:58:43.738] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 05:58:43.968] pod/env-test-pod created
W0814 05:58:44.069] error: min-available and max-unavailable cannot be both specified
I0814 05:58:44.177] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0814 05:58:44.178] Name:         env-test-pod
I0814 05:58:44.178] Namespace:    test-kubectl-describe-pod
I0814 05:58:44.178] Priority:     0
I0814 05:58:44.178] Node:         <none>
I0814 05:58:44.178] Labels:       <none>
... skipping 173 lines ...
I0814 05:59:00.318] pod/valid-pod patched
I0814 05:59:00.442] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0814 05:59:00.546] pod/valid-pod patched
I0814 05:59:00.764] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0814 05:59:00.973] pod/valid-pod patched
I0814 05:59:01.092] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 05:59:01.315] +++ [0814 05:59:01] "kubectl patch with resourceVersion 503" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0814 05:59:01.616] pod "valid-pod" deleted
I0814 05:59:01.631] pod/valid-pod replaced
I0814 05:59:01.760] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0814 05:59:01.949] Successful
I0814 05:59:01.950] message:error: --grace-period must have --force specified
I0814 05:59:01.950] has:\-\-grace-period must have \-\-force specified
I0814 05:59:02.154] Successful
I0814 05:59:02.154] message:error: --timeout must have --force specified
I0814 05:59:02.154] has:\-\-timeout must have \-\-force specified
W0814 05:59:02.357] W0814 05:59:02.356698   53051 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0814 05:59:02.458] node/node-v1-test created
I0814 05:59:02.563] node/node-v1-test replaced
I0814 05:59:02.709] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0814 05:59:02.823] node "node-v1-test" deleted
I0814 05:59:02.953] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0814 05:59:03.300] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 26 lines ...
I0814 05:59:04.928]     name: kubernetes-pause
I0814 05:59:04.928] has:localonlyvalue
I0814 05:59:04.966] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0814 05:59:05.185] core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0814 05:59:05.300] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0814 05:59:05.415] pod/valid-pod labeled
W0814 05:59:05.516] error: 'name' already has a value (valid-pod), and --overwrite is false
I0814 05:59:05.616] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0814 05:59:05.652] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 05:59:05.757] pod "valid-pod" force deleted
W0814 05:59:05.858] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 05:59:05.959] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 05:59:05.960] +++ [0814 05:59:05] Creating namespace namespace-1565762345-18269
... skipping 82 lines ...
I0814 05:59:14.905] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0814 05:59:14.908] +++ working dir: /go/src/k8s.io/kubernetes
I0814 05:59:14.910] +++ command: run_kubectl_create_error_tests
I0814 05:59:14.923] +++ [0814 05:59:14] Creating namespace namespace-1565762354-10214
I0814 05:59:15.045] namespace/namespace-1565762354-10214 created
I0814 05:59:15.141] Context "test" modified.
I0814 05:59:15.148] +++ [0814 05:59:15] Testing kubectl create with error
W0814 05:59:15.249] Error: must specify one of -f and -k
W0814 05:59:15.250] 
W0814 05:59:15.250] Create a resource from a file or from stdin.
W0814 05:59:15.251] 
W0814 05:59:15.251]  JSON and YAML formats are accepted.
W0814 05:59:15.252] 
W0814 05:59:15.252] Examples:
... skipping 41 lines ...
W0814 05:59:15.271] 
W0814 05:59:15.271] Usage:
W0814 05:59:15.272]   kubectl create -f FILENAME [options]
W0814 05:59:15.272] 
W0814 05:59:15.273] Use "kubectl <command> --help" for more information about a given command.
W0814 05:59:15.273] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0814 05:59:15.451] +++ [0814 05:59:15] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0814 05:59:15.552] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 05:59:15.553] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 05:59:15.677] +++ exit code: 0
I0814 05:59:15.709] Recording: run_kubectl_apply_tests
I0814 05:59:15.710] Running command: run_kubectl_apply_tests
I0814 05:59:15.731] 
... skipping 20 lines ...
W0814 05:59:18.380] I0814 05:59:18.380056   49577 client.go:354] scheme "" not registered, fallback to default scheme
W0814 05:59:18.381] I0814 05:59:18.380897   49577 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0814 05:59:18.382] I0814 05:59:18.381682   49577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 05:59:18.384] I0814 05:59:18.383982   49577 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 05:59:18.387] I0814 05:59:18.386866   49577 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0814 05:59:18.487] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0814 05:59:18.588] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0814 05:59:18.689] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 05:59:18.690] +++ exit code: 0
I0814 05:59:18.704] Recording: run_kubectl_run_tests
I0814 05:59:18.705] Running command: run_kubectl_run_tests
I0814 05:59:18.725] 
I0814 05:59:18.729] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 96 lines ...
I0814 05:59:22.059] Context "test" modified.
I0814 05:59:22.067] +++ [0814 05:59:22] Testing kubectl create filter
I0814 05:59:22.180] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 05:59:22.398] pod/selector-test-pod created
I0814 05:59:22.531] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0814 05:59:22.645] Successful
I0814 05:59:22.646] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0814 05:59:22.646] has:pods "selector-test-pod-dont-apply" not found
I0814 05:59:22.744] pod "selector-test-pod" deleted
I0814 05:59:22.763] +++ exit code: 0
I0814 05:59:22.798] Recording: run_kubectl_apply_deployments_tests
I0814 05:59:22.798] Running command: run_kubectl_apply_deployments_tests
I0814 05:59:22.818] 
... skipping 29 lines ...
W0814 05:59:25.685] I0814 05:59:25.589573   53051 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565762362-27063", Name:"nginx", UID:"45d31c5b-0b3e-44de-8a00-e404c10e4096", APIVersion:"apps/v1", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7dbc4d9f to 3
W0814 05:59:25.686] I0814 05:59:25.599891   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762362-27063", Name:"nginx-7dbc4d9f", UID:"3dbcd7c1-66bc-4eed-aff5-eebf13b6314f", APIVersion:"apps/v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-4l758
W0814 05:59:25.686] I0814 05:59:25.604049   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762362-27063", Name:"nginx-7dbc4d9f", UID:"3dbcd7c1-66bc-4eed-aff5-eebf13b6314f", APIVersion:"apps/v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-bhvvb
W0814 05:59:25.687] I0814 05:59:25.605732   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762362-27063", Name:"nginx-7dbc4d9f", UID:"3dbcd7c1-66bc-4eed-aff5-eebf13b6314f", APIVersion:"apps/v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7dbc4d9f-l27zl
I0814 05:59:25.788] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0814 05:59:30.018] Successful
I0814 05:59:30.020] message:Error from server (Conflict): error when applying patch:
I0814 05:59:30.021] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565762362-27063\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0814 05:59:30.021] to:
I0814 05:59:30.021] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0814 05:59:30.022] Name: "nginx", Namespace: "namespace-1565762362-27063"
I0814 05:59:30.026] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1565762362-27063\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-08-14T05:59:25Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]] "k:{\"type\":\"Progressing\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-08-14T05:59:25Z"] map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map["f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-08-14T05:59:25Z"]] "name":"nginx" "namespace":"namespace-1565762362-27063" "resourceVersion":"600" "selfLink":"/apis/apps/v1/namespaces/namespace-1565762362-27063/deployments/nginx" "uid":"45d31c5b-0b3e-44de-8a00-e404c10e4096"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] 
"status":map["conditions":[map["lastTransitionTime":"2019-08-14T05:59:25Z" "lastUpdateTime":"2019-08-14T05:59:25Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-08-14T05:59:25Z" "lastUpdateTime":"2019-08-14T05:59:25Z" "message":"ReplicaSet \"nginx-7dbc4d9f\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0814 05:59:30.026] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0814 05:59:30.027] has:Error from server (Conflict)
W0814 05:59:30.127] I0814 05:59:29.153224   53051 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1565762351-7696
I0814 05:59:35.425] deployment.apps/nginx configured
W0814 05:59:35.526] I0814 05:59:35.430637   53051 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565762362-27063", Name:"nginx", UID:"3531873d-e007-4f97-be74-0771fd544202", APIVersion:"apps/v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0814 05:59:35.527] I0814 05:59:35.437241   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762362-27063", Name:"nginx-594f77b9f6", UID:"fa0645cb-6d7a-43c9-adad-7e0b6b69bf7b", APIVersion:"apps/v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-zrgkt
W0814 05:59:35.528] I0814 05:59:35.443215   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762362-27063", Name:"nginx-594f77b9f6", UID:"fa0645cb-6d7a-43c9-adad-7e0b6b69bf7b", APIVersion:"apps/v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-n5m55
W0814 05:59:35.528] I0814 05:59:35.447458   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762362-27063", Name:"nginx-594f77b9f6", UID:"fa0645cb-6d7a-43c9-adad-7e0b6b69bf7b", APIVersion:"apps/v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-f52tp
I0814 05:59:35.628] Successful
I0814 05:59:35.629] message:        "name": "nginx2"
I0814 05:59:35.629]           "name": "nginx2"
I0814 05:59:35.629] has:"name": "nginx2"
W0814 05:59:39.937] E0814 05:59:39.936701   53051 replica_set.go:450] Sync "namespace-1565762362-27063/nginx-594f77b9f6" failed with replicasets.apps "nginx-594f77b9f6" not found
W0814 05:59:40.895] I0814 05:59:40.894186   53051 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565762362-27063", Name:"nginx", UID:"e6dcd229-eba9-4d1e-80ab-9345850699bd", APIVersion:"apps/v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-594f77b9f6 to 3
W0814 05:59:40.900] I0814 05:59:40.900010   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762362-27063", Name:"nginx-594f77b9f6", UID:"e735c7d0-1add-4eaa-84ba-e06ba14ed017", APIVersion:"apps/v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-hxz6q
W0814 05:59:40.904] I0814 05:59:40.904025   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762362-27063", Name:"nginx-594f77b9f6", UID:"e735c7d0-1add-4eaa-84ba-e06ba14ed017", APIVersion:"apps/v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-dmld6
W0814 05:59:40.907] I0814 05:59:40.906358   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762362-27063", Name:"nginx-594f77b9f6", UID:"e735c7d0-1add-4eaa-84ba-e06ba14ed017", APIVersion:"apps/v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-594f77b9f6-7vv5m
I0814 05:59:41.007] Successful
I0814 05:59:41.008] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 183 lines ...
I0814 05:59:43.310] +++ [0814 05:59:43] Creating namespace namespace-1565762383-28040
I0814 05:59:43.429] namespace/namespace-1565762383-28040 created
I0814 05:59:43.522] Context "test" modified.
I0814 05:59:43.530] +++ [0814 05:59:43] Testing kubectl get
I0814 05:59:43.637] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 05:59:43.738] Successful
I0814 05:59:43.739] message:Error from server (NotFound): pods "abc" not found
I0814 05:59:43.740] has:pods "abc" not found
I0814 05:59:43.852] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 05:59:43.958] Successful
I0814 05:59:43.959] message:Error from server (NotFound): pods "abc" not found
I0814 05:59:43.959] has:pods "abc" not found
I0814 05:59:44.091] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 05:59:44.212] Successful
I0814 05:59:44.213] message:{
I0814 05:59:44.213]     "apiVersion": "v1",
I0814 05:59:44.214]     "items": [],
... skipping 23 lines ...
I0814 05:59:44.674] has not:No resources found
I0814 05:59:44.781] Successful
I0814 05:59:44.782] message:NAME
I0814 05:59:44.783] has not:No resources found
I0814 05:59:44.895] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 05:59:45.046] Successful
I0814 05:59:45.047] message:error: the server doesn't have a resource type "foobar"
I0814 05:59:45.048] has not:No resources found
I0814 05:59:45.153] Successful
I0814 05:59:45.154] message:No resources found in namespace-1565762383-28040 namespace.
I0814 05:59:45.154] has:No resources found
I0814 05:59:45.270] Successful
I0814 05:59:45.271] message:
I0814 05:59:45.272] has not:No resources found
I0814 05:59:45.387] Successful
I0814 05:59:45.389] message:No resources found in namespace-1565762383-28040 namespace.
I0814 05:59:45.389] has:No resources found
I0814 05:59:45.531] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 05:59:45.641] Successful
I0814 05:59:45.642] message:Error from server (NotFound): pods "abc" not found
I0814 05:59:45.642] has:pods "abc" not found
I0814 05:59:45.643] FAIL!
I0814 05:59:45.643] message:Error from server (NotFound): pods "abc" not found
I0814 05:59:45.644] has not:List
I0814 05:59:45.644] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0814 05:59:45.786] Successful
I0814 05:59:45.787] message:I0814 05:59:45.724159   63366 loader.go:375] Config loaded from file:  /tmp/tmp.HGEMZR7EeV/.kube/config
I0814 05:59:45.787] I0814 05:59:45.725700   63366 round_trippers.go:471] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0814 05:59:45.787] I0814 05:59:45.761285   63366 round_trippers.go:471] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 660 lines ...
I0814 05:59:51.738] Successful
I0814 05:59:51.739] message:NAME    DATA   AGE
I0814 05:59:51.739] one     0      0s
I0814 05:59:51.739] three   0      0s
I0814 05:59:51.739] two     0      0s
I0814 05:59:51.739] STATUS    REASON          MESSAGE
I0814 05:59:51.739] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 05:59:51.740] has not:watch is only supported on individual resources
I0814 05:59:52.863] Successful
I0814 05:59:52.863] message:STATUS    REASON          MESSAGE
I0814 05:59:52.863] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 05:59:52.863] has not:watch is only supported on individual resources
I0814 05:59:52.869] +++ [0814 05:59:52] Creating namespace namespace-1565762392-6636
I0814 05:59:52.963] namespace/namespace-1565762392-6636 created
I0814 05:59:53.052] Context "test" modified.
I0814 05:59:53.166] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 05:59:53.357] pod/valid-pod created
... skipping 104 lines ...
I0814 05:59:53.482] }
I0814 05:59:53.595] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 05:59:53.910] <no value>Successful
I0814 05:59:53.911] message:valid-pod:
I0814 05:59:53.911] has:valid-pod:
I0814 05:59:54.015] Successful
I0814 05:59:54.016] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0814 05:59:54.016] 	template was:
I0814 05:59:54.016] 		{.missing}
I0814 05:59:54.016] 	object given to jsonpath engine was:
I0814 05:59:54.019] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-08-14T05:59:53Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-08-14T05:59:53Z"}}, "name":"valid-pod", "namespace":"namespace-1565762392-6636", "resourceVersion":"703", "selfLink":"/api/v1/namespaces/namespace-1565762392-6636/pods/valid-pod", "uid":"fd0ccc46-4963-4d52-afe9-ccb76538ee08"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0814 05:59:54.019] has:missing is not found
W0814 05:59:54.120] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0814 05:59:54.221] Successful
I0814 05:59:54.221] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0814 05:59:54.222] 	template was:
I0814 05:59:54.222] 		{{.missing}}
I0814 05:59:54.222] 	raw data was:
I0814 05:59:54.224] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-08-14T05:59:53Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-08-14T05:59:53Z"}],"name":"valid-pod","namespace":"namespace-1565762392-6636","resourceVersion":"703","selfLink":"/api/v1/namespaces/namespace-1565762392-6636/pods/valid-pod","uid":"fd0ccc46-4963-4d52-afe9-ccb76538ee08"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0814 05:59:54.224] 	object given to template engine was:
I0814 05:59:54.225] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-08-14T05:59:53Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-08-14T05:59:53Z]] name:valid-pod namespace:namespace-1565762392-6636 resourceVersion:703 selfLink:/api/v1/namespaces/namespace-1565762392-6636/pods/valid-pod uid:fd0ccc46-4963-4d52-afe9-ccb76538ee08] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0814 05:59:54.226] has:map has no entry for key "missing"
I0814 05:59:55.233] Successful
I0814 05:59:55.234] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 05:59:55.234] valid-pod   0/1     Pending   0          1s
I0814 05:59:55.234] STATUS      REASON          MESSAGE
I0814 05:59:55.235] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 05:59:55.235] has:STATUS
I0814 05:59:55.236] Successful
I0814 05:59:55.236] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 05:59:55.236] valid-pod   0/1     Pending   0          1s
I0814 05:59:55.237] STATUS      REASON          MESSAGE
I0814 05:59:55.237] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 05:59:55.237] has:valid-pod
I0814 05:59:56.344] Successful
I0814 05:59:56.344] message:pod/valid-pod
I0814 05:59:56.345] has not:STATUS
I0814 05:59:56.347] Successful
I0814 05:59:56.348] message:pod/valid-pod
... skipping 144 lines ...
I0814 05:59:57.491] status:
I0814 05:59:57.491]   phase: Pending
I0814 05:59:57.491]   qosClass: Guaranteed
I0814 05:59:57.491] ---
I0814 05:59:57.491] has:name: valid-pod
I0814 05:59:57.581] Successful
I0814 05:59:57.581] message:Error from server (NotFound): pods "invalid-pod" not found
I0814 05:59:57.581] has:"invalid-pod" not found
I0814 05:59:57.678] pod "valid-pod" deleted
I0814 05:59:57.806] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 05:59:58.005] pod/redis-master created
I0814 05:59:58.010] pod/valid-pod created
I0814 05:59:58.142] Successful
... skipping 35 lines ...
I0814 05:59:59.611] +++ command: run_kubectl_exec_pod_tests
I0814 05:59:59.622] +++ [0814 05:59:59] Creating namespace namespace-1565762399-22579
I0814 05:59:59.713] namespace/namespace-1565762399-22579 created
I0814 05:59:59.800] Context "test" modified.
I0814 05:59:59.808] +++ [0814 05:59:59] Testing kubectl exec POD COMMAND
I0814 05:59:59.904] Successful
I0814 05:59:59.904] message:Error from server (NotFound): pods "abc" not found
I0814 05:59:59.905] has:pods "abc" not found
I0814 06:00:00.065] pod/test-pod created
I0814 06:00:00.172] Successful
I0814 06:00:00.172] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 06:00:00.173] has not:pods "test-pod" not found
I0814 06:00:00.174] Successful
I0814 06:00:00.174] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 06:00:00.175] has not:pod or type/name must be specified
I0814 06:00:00.268] pod "test-pod" deleted
I0814 06:00:00.290] +++ exit code: 0
I0814 06:00:00.325] Recording: run_kubectl_exec_resource_name_tests
I0814 06:00:00.326] Running command: run_kubectl_exec_resource_name_tests
I0814 06:00:00.346] 
... skipping 2 lines ...
I0814 06:00:00.352] +++ command: run_kubectl_exec_resource_name_tests
I0814 06:00:00.364] +++ [0814 06:00:00] Creating namespace namespace-1565762400-27490
I0814 06:00:00.494] namespace/namespace-1565762400-27490 created
I0814 06:00:00.584] Context "test" modified.
I0814 06:00:00.592] +++ [0814 06:00:00] Testing kubectl exec TYPE/NAME COMMAND
I0814 06:00:00.720] Successful
I0814 06:00:00.721] message:error: the server doesn't have a resource type "foo"
I0814 06:00:00.721] has:error:
I0814 06:00:00.827] Successful
I0814 06:00:00.828] message:Error from server (NotFound): deployments.apps "bar" not found
I0814 06:00:00.828] has:"bar" not found
I0814 06:00:01.037] pod/test-pod created
I0814 06:00:01.244] replicaset.apps/frontend created
W0814 06:00:01.345] I0814 06:00:01.250834   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762400-27490", Name:"frontend", UID:"311c9a1f-aaa6-4cf4-9fe4-68ed63bed105", APIVersion:"apps/v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tn5wk
W0814 06:00:01.347] I0814 06:00:01.256283   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762400-27490", Name:"frontend", UID:"311c9a1f-aaa6-4cf4-9fe4-68ed63bed105", APIVersion:"apps/v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5xvfg
W0814 06:00:01.348] I0814 06:00:01.263061   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762400-27490", Name:"frontend", UID:"311c9a1f-aaa6-4cf4-9fe4-68ed63bed105", APIVersion:"apps/v1", ResourceVersion:"757", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8xvc6
I0814 06:00:01.451] configmap/test-set-env-config created
I0814 06:00:01.567] Successful
I0814 06:00:01.568] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0814 06:00:01.568] has:not implemented
I0814 06:00:01.690] Successful
I0814 06:00:01.692] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 06:00:01.692] has not:not found
I0814 06:00:01.693] Successful
I0814 06:00:01.693] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0814 06:00:01.693] has not:pod or type/name must be specified
I0814 06:00:01.823] Successful
I0814 06:00:01.823] message:Error from server (BadRequest): pod frontend-5xvfg does not have a host assigned
I0814 06:00:01.824] has not:not found
I0814 06:00:01.825] Successful
I0814 06:00:01.825] message:Error from server (BadRequest): pod frontend-5xvfg does not have a host assigned
I0814 06:00:01.826] has not:pod or type/name must be specified
I0814 06:00:01.926] pod "test-pod" deleted
I0814 06:00:02.032] replicaset.apps "frontend" deleted
I0814 06:00:02.141] configmap "test-set-env-config" deleted
I0814 06:00:02.162] +++ exit code: 0
I0814 06:00:02.197] Recording: run_create_secret_tests
I0814 06:00:02.198] Running command: run_create_secret_tests
I0814 06:00:02.219] 
I0814 06:00:02.221] +++ Running case: test-cmd.run_create_secret_tests 
I0814 06:00:02.224] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:00:02.226] +++ command: run_create_secret_tests
I0814 06:00:02.353] Successful
I0814 06:00:02.354] message:Error from server (NotFound): secrets "mysecret" not found
I0814 06:00:02.354] has:secrets "mysecret" not found
I0814 06:00:02.570] Successful
I0814 06:00:02.571] message:Error from server (NotFound): secrets "mysecret" not found
I0814 06:00:02.571] has:secrets "mysecret" not found
I0814 06:00:02.572] Successful
I0814 06:00:02.573] message:user-specified
I0814 06:00:02.573] has:user-specified
I0814 06:00:02.672] Successful
I0814 06:00:02.782] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"f1828efe-99d2-4a16-a645-b04d01b9dab9","resourceVersion":"778","creationTimestamp":"2019-08-14T06:00:02Z"}}
... skipping 2 lines ...
I0814 06:00:03.030] has:uid
I0814 06:00:03.159] Successful
I0814 06:00:03.160] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"f1828efe-99d2-4a16-a645-b04d01b9dab9","resourceVersion":"779","creationTimestamp":"2019-08-14T06:00:02Z","managedFields":[{"manager":"kubectl","operation":"Update","apiVersion":"v1","time":"2019-08-14T06:00:03Z","fields":{"f:data":{"f:key1":{},".":{}}}}]},"data":{"key1":"config1"}}
I0814 06:00:03.160] has:config1
I0814 06:00:03.249] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"f1828efe-99d2-4a16-a645-b04d01b9dab9"}}
I0814 06:00:03.370] Successful
I0814 06:00:03.371] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0814 06:00:03.371] has:configmaps "tester-update-cm" not found
I0814 06:00:03.383] +++ exit code: 0
I0814 06:00:03.417] Recording: run_kubectl_create_kustomization_directory_tests
I0814 06:00:03.418] Running command: run_kubectl_create_kustomization_directory_tests
I0814 06:00:03.439] 
I0814 06:00:03.441] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 158 lines ...
I0814 06:00:06.842] valid-pod   0/1     Pending   0          0s
I0814 06:00:06.842] has:valid-pod
I0814 06:00:07.952] Successful
I0814 06:00:07.970] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 06:00:07.971] valid-pod   0/1     Pending   0          0s
I0814 06:00:07.971] STATUS      REASON          MESSAGE
I0814 06:00:07.972] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0814 06:00:07.972] has:Timeout exceeded while reading body
I0814 06:00:08.056] Successful
I0814 06:00:08.057] message:NAME        READY   STATUS    RESTARTS   AGE
I0814 06:00:08.057] valid-pod   0/1     Pending   0          2s
I0814 06:00:08.058] has:valid-pod
I0814 06:00:08.170] Successful
I0814 06:00:08.171] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0814 06:00:08.172] has:Invalid timeout value
I0814 06:00:08.276] pod "valid-pod" deleted
I0814 06:00:08.298] +++ exit code: 0
I0814 06:00:08.335] Recording: run_crd_tests
I0814 06:00:08.336] Running command: run_crd_tests
I0814 06:00:08.356] 
... skipping 258 lines ...
I0814 06:00:14.597] foo.company.com/test patched
I0814 06:00:14.724] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0814 06:00:14.833] foo.company.com/test patched
W0814 06:00:14.942] I0814 06:00:14.768804   53051 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0814 06:00:14.942] I0814 06:00:14.869100   53051 controller_utils.go:1036] Caches are synced for garbage collector controller
I0814 06:00:15.043] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0814 06:00:15.153] +++ [0814 06:00:15] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0814 06:00:15.243] {
I0814 06:00:15.243]     "apiVersion": "company.com/v1",
I0814 06:00:15.244]     "kind": "Foo",
I0814 06:00:15.244]     "metadata": {
I0814 06:00:15.245]         "annotations": {
I0814 06:00:15.245]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 339 lines ...
I0814 06:00:25.091] namespace/non-native-resources created
I0814 06:00:25.326] bar.company.com/test created
I0814 06:00:25.452] crd.sh:455: Successful get bars {{len .items}}: 1
I0814 06:00:25.549] namespace "non-native-resources" deleted
I0814 06:00:30.825] crd.sh:458: Successful get bars {{len .items}}: 0
I0814 06:00:31.043] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0814 06:00:31.143] Error from server (NotFound): namespaces "non-native-resources" not found
I0814 06:00:31.244] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0814 06:00:31.358] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0814 06:00:31.498] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0814 06:00:31.535] +++ exit code: 0
I0814 06:00:31.602] Recording: run_cmd_with_img_tests
I0814 06:00:31.603] Running command: run_cmd_with_img_tests
... skipping 6 lines ...
I0814 06:00:31.845] Context "test" modified.
I0814 06:00:31.853] +++ [0814 06:00:31] Testing cmd with image
W0814 06:00:31.954] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 06:00:31.963] I0814 06:00:31.962359   53051 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565762431-25953", Name:"test1", UID:"00b6e32e-3183-4550-9b83-59c0f27db573", APIVersion:"apps/v1", ResourceVersion:"926", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-9797f89d8 to 1
W0814 06:00:31.972] I0814 06:00:31.971649   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762431-25953", Name:"test1-9797f89d8", UID:"3f302854-053e-4ac0-bca6-1073d83752f1", APIVersion:"apps/v1", ResourceVersion:"927", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-9797f89d8-pppd9
W0814 06:00:32.063] W0814 06:00:32.063326   49577 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 06:00:32.065] E0814 06:00:32.065089   53051 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:32.166] Successful
I0814 06:00:32.166] message:deployment.apps/test1 created
I0814 06:00:32.167] has:deployment.apps/test1 created
I0814 06:00:32.167] deployment.apps "test1" deleted
I0814 06:00:32.230] Successful
I0814 06:00:32.230] message:error: Invalid image name "InvalidImageName": invalid reference format
I0814 06:00:32.231] has:error: Invalid image name "InvalidImageName": invalid reference format
I0814 06:00:32.242] +++ exit code: 0
I0814 06:00:32.284] +++ [0814 06:00:32] Testing recursive resources
I0814 06:00:32.291] +++ [0814 06:00:32] Creating namespace namespace-1565762432-14429
W0814 06:00:32.401] W0814 06:00:32.214291   49577 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 06:00:32.401] E0814 06:00:32.216862   53051 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:32.402] W0814 06:00:32.377997   49577 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 06:00:32.402] E0814 06:00:32.379450   53051 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:32.502] namespace/namespace-1565762432-14429 created
I0814 06:00:32.531] Context "test" modified.
W0814 06:00:32.632] W0814 06:00:32.514501   49577 cacher.go:154] Terminating all watchers from cacher *unstructured.Unstructured
W0814 06:00:32.632] E0814 06:00:32.516227   53051 reflector.go:282] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:32.733] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:00:33.018] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:33.020] Successful
I0814 06:00:33.021] message:pod/busybox0 created
I0814 06:00:33.021] pod/busybox1 created
I0814 06:00:33.021] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 06:00:33.022] has:error validating data: kind not set
W0814 06:00:33.122] E0814 06:00:33.066905   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:33.219] E0814 06:00:33.218393   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:33.320] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:33.355] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0814 06:00:33.358] Successful
I0814 06:00:33.359] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:00:33.360] has:Object 'Kind' is missing
W0814 06:00:33.470] E0814 06:00:33.380820   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:33.518] E0814 06:00:33.517979   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:33.623] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:33.943] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 06:00:33.947] Successful
I0814 06:00:33.947] message:pod/busybox0 replaced
I0814 06:00:33.948] pod/busybox1 replaced
I0814 06:00:33.948] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 06:00:33.949] has:error validating data: kind not set
I0814 06:00:34.063] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:34.192] Successful
I0814 06:00:34.192] message:Name:         busybox0
I0814 06:00:34.193] Namespace:    namespace-1565762432-14429
I0814 06:00:34.193] Priority:     0
I0814 06:00:34.193] Node:         <none>
... skipping 154 lines ...
I0814 06:00:34.219] QoS Class:        BestEffort
I0814 06:00:34.219] Node-Selectors:   <none>
I0814 06:00:34.219] Tolerations:      <none>
I0814 06:00:34.219] Events:           <none>
I0814 06:00:34.219] unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:00:34.220] has:Object 'Kind' is missing
W0814 06:00:34.320] E0814 06:00:34.069256   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:34.321] E0814 06:00:34.220578   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:34.383] E0814 06:00:34.382543   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:34.484] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:34.584] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0814 06:00:34.588] Successful
I0814 06:00:34.589] message:pod/busybox0 annotated
I0814 06:00:34.589] pod/busybox1 annotated
I0814 06:00:34.590] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:00:34.590] has:Object 'Kind' is missing
W0814 06:00:34.691] E0814 06:00:34.520362   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:34.792] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:35.072] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0814 06:00:35.075] Successful
I0814 06:00:35.076] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 06:00:35.077] pod/busybox0 configured
I0814 06:00:35.077] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0814 06:00:35.078] pod/busybox1 configured
I0814 06:00:35.078] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0814 06:00:35.079] has:error validating data: kind not set
W0814 06:00:35.180] E0814 06:00:35.071645   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:35.222] E0814 06:00:35.222118   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:35.323] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:00:35.385] deployment.apps/nginx created
W0814 06:00:35.486] E0814 06:00:35.385172   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:35.487] I0814 06:00:35.391533   53051 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565762432-14429", Name:"nginx", UID:"05d9a3ea-9e2a-4019-b0ee-cfd5be1e2b6e", APIVersion:"apps/v1", ResourceVersion:"953", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-bbbbb95b5 to 3
W0814 06:00:35.488] I0814 06:00:35.398355   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762432-14429", Name:"nginx-bbbbb95b5", UID:"bf89a545-6100-4866-92aa-fc35537e7f46", APIVersion:"apps/v1", ResourceVersion:"954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-gtcv2
W0814 06:00:35.489] I0814 06:00:35.403363   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762432-14429", Name:"nginx-bbbbb95b5", UID:"bf89a545-6100-4866-92aa-fc35537e7f46", APIVersion:"apps/v1", ResourceVersion:"954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-fl6f4
W0814 06:00:35.489] I0814 06:00:35.403874   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762432-14429", Name:"nginx-bbbbb95b5", UID:"bf89a545-6100-4866-92aa-fc35537e7f46", APIVersion:"apps/v1", ResourceVersion:"954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-bbbbb95b5-m4rml
W0814 06:00:35.522] E0814 06:00:35.522118   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:35.623] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 06:00:35.666] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0814 06:00:35.898] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I0814 06:00:35.902] Successful
I0814 06:00:35.903] message:apiVersion: extensions/v1beta1
I0814 06:00:35.903] kind: Deployment
... skipping 36 lines ...
I0814 06:00:35.909]       securityContext: {}
I0814 06:00:35.909]       terminationGracePeriodSeconds: 30
I0814 06:00:35.909] status: {}
I0814 06:00:35.909] has:extensions/v1beta1
W0814 06:00:36.010] kubectl convert is DEPRECATED and will be removed in a future version.
W0814 06:00:36.010] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0814 06:00:36.073] E0814 06:00:36.073000   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:36.174] deployment.apps "nginx" deleted
I0814 06:00:36.175] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:36.382] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:36.386] Successful
I0814 06:00:36.387] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0814 06:00:36.388] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0814 06:00:36.389] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:00:36.389] has:Object 'Kind' is missing
W0814 06:00:36.490] E0814 06:00:36.223711   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:36.491] E0814 06:00:36.386505   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:36.524] E0814 06:00:36.523511   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:36.625] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:36.628] Successful
I0814 06:00:36.629] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:00:36.629] has:busybox0:busybox1:
I0814 06:00:36.632] Successful
I0814 06:00:36.633] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:00:36.633] has:Object 'Kind' is missing
I0814 06:00:36.757] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:36.880] pod/busybox0 labeled
I0814 06:00:36.881] pod/busybox1 labeled
I0814 06:00:36.882] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:00:37.012] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0814 06:00:37.015] Successful
I0814 06:00:37.016] message:pod/busybox0 labeled
I0814 06:00:37.016] pod/busybox1 labeled
I0814 06:00:37.017] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:00:37.017] has:Object 'Kind' is missing
W0814 06:00:37.118] E0814 06:00:37.075838   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:37.219] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:37.255] pod/busybox0 patched
I0814 06:00:37.256] pod/busybox1 patched
I0814 06:00:37.256] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0814 06:00:37.357] E0814 06:00:37.225407   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:37.388] E0814 06:00:37.387741   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:37.489] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0814 06:00:37.489] Successful
I0814 06:00:37.490] message:pod/busybox0 patched
I0814 06:00:37.490] pod/busybox1 patched
I0814 06:00:37.490] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:00:37.490] has:Object 'Kind' is missing
I0814 06:00:37.515] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:37.792] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:00:37.794] Successful
I0814 06:00:37.795] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 06:00:37.795] pod "busybox0" force deleted
I0814 06:00:37.796] pod "busybox1" force deleted
I0814 06:00:37.796] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0814 06:00:37.797] has:Object 'Kind' is missing
W0814 06:00:37.897] E0814 06:00:37.524838   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:37.998] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:00:38.118] replicationcontroller/busybox0 created
I0814 06:00:38.125] replicationcontroller/busybox1 created
W0814 06:00:38.226] E0814 06:00:38.077422   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:38.227] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 06:00:38.227] I0814 06:00:38.126840   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565762432-14429", Name:"busybox0", UID:"e2c6a1f2-ea53-4b8b-98e3-9a2ef7e5330d", APIVersion:"v1", ResourceVersion:"984", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-2tbc2
W0814 06:00:38.228] I0814 06:00:38.139039   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565762432-14429", Name:"busybox1", UID:"88a9d01e-1473-465f-8dae-2f38829b6b23", APIVersion:"v1", ResourceVersion:"986", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-622h9
W0814 06:00:38.229] E0814 06:00:38.227311   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:38.329] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:38.418] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:38.531] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 06:00:38.649] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 06:00:38.873] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 06:00:39.006] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0814 06:00:39.010] Successful
I0814 06:00:39.011] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0814 06:00:39.011] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0814 06:00:39.012] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:39.013] has:Object 'Kind' is missing
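The HPA assertions above (min 1, max 2, target 80%) follow a recursive kubectl autoscale over the rc fixtures; roughly the per-object equivalent, with the exact flags assumed rather than copied from the harness:

  kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80
  kubectl get hpa busybox0 \
    -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}'
  # -> 1 2 80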
W0814 06:00:39.114] E0814 06:00:38.390372   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:39.114] E0814 06:00:38.527588   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:39.115] E0814 06:00:39.078724   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:39.215] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0814 06:00:39.233] horizontalpodautoscaler.autoscaling "busybox1" deleted
W0814 06:00:39.334] E0814 06:00:39.229065   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:39.393] E0814 06:00:39.392309   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:39.494] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:39.494] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 06:00:39.580] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 06:00:39.840] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 06:00:39.951] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0814 06:00:39.955] Successful
I0814 06:00:39.956] message:service/busybox0 exposed
I0814 06:00:39.956] service/busybox1 exposed
I0814 06:00:39.957] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:39.957] has:Object 'Kind' is missing
W0814 06:00:40.058] E0814 06:00:39.529220   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:40.080] E0814 06:00:40.080065   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:40.181] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:40.195] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0814 06:00:40.314] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0814 06:00:40.587] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0814 06:00:40.695] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0814 06:00:40.698] Successful
I0814 06:00:40.699] message:replicationcontroller/busybox0 scaled
I0814 06:00:40.699] replicationcontroller/busybox1 scaled
I0814 06:00:40.700] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:40.700] has:Object 'Kind' is missing
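The replica counts going from 1 to 2 above correspond to a recursive kubectl scale over the same fixture directory; a per-object sketch of that step:

  kubectl scale rc busybox0 busybox1 --replicas=2
  kubectl get rc busybox0 -o go-template='{{.spec.replicas}}'
  # -> 2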
W0814 06:00:40.801] E0814 06:00:40.230841   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:40.802] E0814 06:00:40.393888   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:40.802] I0814 06:00:40.448488   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565762432-14429", Name:"busybox0", UID:"e2c6a1f2-ea53-4b8b-98e3-9a2ef7e5330d", APIVersion:"v1", ResourceVersion:"1006", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-cgmbv
W0814 06:00:40.802] I0814 06:00:40.469670   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565762432-14429", Name:"busybox1", UID:"88a9d01e-1473-465f-8dae-2f38829b6b23", APIVersion:"v1", ResourceVersion:"1011", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-qsh4z
W0814 06:00:40.803] E0814 06:00:40.531171   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:40.903] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:41.101] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:00:41.104] Successful
I0814 06:00:41.105] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 06:00:41.106] replicationcontroller "busybox0" force deleted
I0814 06:00:41.107] replicationcontroller "busybox1" force deleted
I0814 06:00:41.108] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:41.109] has:Object 'Kind' is missing
W0814 06:00:41.209] E0814 06:00:41.081426   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:41.233] E0814 06:00:41.232688   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:41.334] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:00:41.425] deployment.apps/nginx1-deployment created
I0814 06:00:41.435] deployment.apps/nginx0-deployment created
W0814 06:00:41.536] E0814 06:00:41.395135   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:41.537] I0814 06:00:41.432206   53051 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565762432-14429", Name:"nginx1-deployment", UID:"808469b6-7d46-4f0b-9f2e-bab922397ab3", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-84f7f49fb7 to 2
W0814 06:00:41.537] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 06:00:41.538] I0814 06:00:41.437932   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762432-14429", Name:"nginx1-deployment-84f7f49fb7", UID:"04696c20-5a3c-4b93-b78f-f6fb86203222", APIVersion:"apps/v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-tkpjq
W0814 06:00:41.538] I0814 06:00:41.444758   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762432-14429", Name:"nginx1-deployment-84f7f49fb7", UID:"04696c20-5a3c-4b93-b78f-f6fb86203222", APIVersion:"apps/v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-84f7f49fb7-zx8s4
W0814 06:00:41.539] I0814 06:00:41.445319   53051 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1565762432-14429", Name:"nginx0-deployment", UID:"d2d7ab22-1aff-4b77-9fcc-76dbf8392b7a", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57475bf54d to 2
W0814 06:00:41.539] I0814 06:00:41.453993   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762432-14429", Name:"nginx0-deployment-57475bf54d", UID:"71da5c63-99dc-4fd5-a90c-1292273abe4a", APIVersion:"apps/v1", ResourceVersion:"1033", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-wjqtd
W0814 06:00:41.540] I0814 06:00:41.466402   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1565762432-14429", Name:"nginx0-deployment-57475bf54d", UID:"71da5c63-99dc-4fd5-a90c-1292273abe4a", APIVersion:"apps/v1", ResourceVersion:"1033", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57475bf54d-kb26v
W0814 06:00:41.540] E0814 06:00:41.533151   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:41.640] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0814 06:00:41.705] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 06:00:41.963] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0814 06:00:41.965] Successful
I0814 06:00:41.965] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0814 06:00:41.966] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0814 06:00:41.966] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 06:00:41.967] has:Object 'Kind' is missing
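"skipped rollback (current template already matches revision 1)" is what kubectl rollout undo reports when the requested revision matches the current pod template; a sketch of the commands implied here, with deployment names taken from the log:

  kubectl rollout undo deployment nginx1-deployment --to-revision=1
  # -> deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
  kubectl rollout history deployment nginx1-deployment
  # -> REVISION  CHANGE-CAUSE
  #    1         <none>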
W0814 06:00:42.083] E0814 06:00:42.082992   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:42.184] deployment.apps/nginx1-deployment paused
I0814 06:00:42.185] deployment.apps/nginx0-deployment paused
I0814 06:00:42.245] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0814 06:00:42.247] Successful
I0814 06:00:42.248] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 06:00:42.249] has:Object 'Kind' is missing
W0814 06:00:42.350] E0814 06:00:42.234994   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:42.398] E0814 06:00:42.398069   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:42.499] deployment.apps/nginx1-deployment resumed
I0814 06:00:42.500] deployment.apps/nginx0-deployment resumed
I0814 06:00:42.547] generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0814 06:00:42.549] Successful
I0814 06:00:42.550] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 06:00:42.551] has:Object 'Kind' is missing
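The .spec.paused checks above bracket a rollout pause/resume pair: the field reads true while paused and is dropped again (rendered as <no value>) after resume. A minimal sketch:

  kubectl rollout pause deployment nginx1-deployment
  kubectl get deployment nginx1-deployment -o go-template='{{.spec.paused}}'   # -> true
  kubectl rollout resume deployment nginx1-deployment
  kubectl get deployment nginx1-deployment -o go-template='{{.spec.paused}}'   # -> <no value>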
W0814 06:00:42.652] E0814 06:00:42.534786   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:42.752] Successful
I0814 06:00:42.753] message:deployment.apps/nginx1-deployment 
I0814 06:00:42.754] REVISION  CHANGE-CAUSE
I0814 06:00:42.754] 1         <none>
I0814 06:00:42.754] 
I0814 06:00:42.755] deployment.apps/nginx0-deployment 
I0814 06:00:42.755] REVISION  CHANGE-CAUSE
I0814 06:00:42.755] 1         <none>
I0814 06:00:42.756] 
I0814 06:00:42.756] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 06:00:42.757] has:nginx0-deployment
I0814 06:00:42.757] Successful
I0814 06:00:42.757] message:deployment.apps/nginx1-deployment 
I0814 06:00:42.757] REVISION  CHANGE-CAUSE
I0814 06:00:42.758] 1         <none>
I0814 06:00:42.758] 
I0814 06:00:42.758] deployment.apps/nginx0-deployment 
I0814 06:00:42.758] REVISION  CHANGE-CAUSE
I0814 06:00:42.758] 1         <none>
I0814 06:00:42.759] 
I0814 06:00:42.759] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 06:00:42.760] has:nginx1-deployment
I0814 06:00:42.760] Successful
I0814 06:00:42.760] message:deployment.apps/nginx1-deployment 
I0814 06:00:42.760] REVISION  CHANGE-CAUSE
I0814 06:00:42.760] 1         <none>
I0814 06:00:42.761] 
I0814 06:00:42.761] deployment.apps/nginx0-deployment 
I0814 06:00:42.761] REVISION  CHANGE-CAUSE
I0814 06:00:42.761] 1         <none>
I0814 06:00:42.761] 
I0814 06:00:42.762] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0814 06:00:42.762] has:Object 'Kind' is missing
I0814 06:00:42.795] deployment.apps "nginx1-deployment" force deleted
I0814 06:00:42.812] deployment.apps "nginx0-deployment" force deleted
W0814 06:00:42.913] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 06:00:42.917] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0814 06:00:43.085] E0814 06:00:43.084349   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:43.237] E0814 06:00:43.236547   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:43.400] E0814 06:00:43.399428   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:43.537] E0814 06:00:43.536360   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:43.938] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:00:44.163] replicationcontroller/busybox0 created
I0814 06:00:44.167] replicationcontroller/busybox1 created
W0814 06:00:44.268] E0814 06:00:44.086807   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:44.270] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0814 06:00:44.271] I0814 06:00:44.169364   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565762432-14429", Name:"busybox0", UID:"ffb04b8e-c1c6-4ac3-9bd1-8ea79d2386a0", APIVersion:"v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-pkn9n
W0814 06:00:44.272] I0814 06:00:44.171543   53051 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1565762432-14429", Name:"busybox1", UID:"546a6567-5ade-41e0-8867-98ae92679607", APIVersion:"v1", ResourceVersion:"1077", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-qg2j2
W0814 06:00:44.273] E0814 06:00:44.238645   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:44.373] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0814 06:00:44.428] Successful
I0814 06:00:44.429] message:no rollbacker has been implemented for "ReplicationController"
I0814 06:00:44.429] no rollbacker has been implemented for "ReplicationController"
I0814 06:00:44.430] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:44.430] has:no rollbacker has been implemented for "ReplicationController"
I0814 06:00:44.431] Successful
I0814 06:00:44.431] message:no rollbacker has been implemented for "ReplicationController"
I0814 06:00:44.432] no rollbacker has been implemented for "ReplicationController"
I0814 06:00:44.432] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:44.433] has:Object 'Kind' is missing
W0814 06:00:44.534] E0814 06:00:44.400890   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:44.538] E0814 06:00:44.537937   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:44.615] I0814 06:00:44.615286   53051 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0814 06:00:44.716] I0814 06:00:44.715889   53051 controller_utils.go:1036] Caches are synced for resource quota controller
W0814 06:00:44.725] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0814 06:00:44.741] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:44.842] Successful
I0814 06:00:44.843] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:44.843] error: replicationcontrollers "busybox0" pausing is not supported
I0814 06:00:44.843] error: replicationcontrollers "busybox1" pausing is not supported
I0814 06:00:44.843] has:Object 'Kind' is missing
I0814 06:00:44.844] Successful
I0814 06:00:44.844] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:44.844] error: replicationcontrollers "busybox0" pausing is not supported
I0814 06:00:44.844] error: replicationcontrollers "busybox1" pausing is not supported
I0814 06:00:44.845] has:replicationcontrollers "busybox0" pausing is not supported
I0814 06:00:44.845] Successful
I0814 06:00:44.845] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:44.845] error: replicationcontrollers "busybox0" pausing is not supported
I0814 06:00:44.846] error: replicationcontrollers "busybox1" pausing is not supported
I0814 06:00:44.846] has:replicationcontrollers "busybox1" pausing is not supported
I0814 06:00:44.846] Successful
I0814 06:00:44.846] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:44.847] error: replicationcontrollers "busybox0" resuming is not supported
I0814 06:00:44.847] error: replicationcontrollers "busybox1" resuming is not supported
I0814 06:00:44.847] has:Object 'Kind' is missing
I0814 06:00:44.847] Successful
I0814 06:00:44.848] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:44.848] error: replicationcontrollers "busybox0" resuming is not supported
I0814 06:00:44.848] error: replicationcontrollers "busybox1" resuming is not supported
I0814 06:00:44.848] has:replicationcontrollers "busybox0" resuming is not supported
I0814 06:00:44.848] Successful
I0814 06:00:44.849] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0814 06:00:44.849] error: replicationcontrollers "busybox0" resuming is not supported
I0814 06:00:44.849] error: replicationcontrollers "busybox1" resuming is not supported
I0814 06:00:44.850] has:replicationcontrollers "busybox0" resuming is not supported
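These assertions verify that rollout pause/resume refuse kinds without rollout machinery: only Deployments support it here, so ReplicationControllers are rejected per object. A sketch of the same rejection, assuming rollout pause/resume accept an rc/name pair the way other rollout commands do:

  kubectl rollout pause rc busybox0
  # -> error: replicationcontrollers "busybox0" pausing is not supported
  kubectl rollout resume rc busybox0
  # -> error: replicationcontrollers "busybox0" resuming is not supported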
I0814 06:00:44.850] replicationcontroller "busybox0" force deleted
I0814 06:00:44.850] replicationcontroller "busybox1" force deleted
W0814 06:00:45.021] I0814 06:00:45.021186   53051 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0814 06:00:45.089] E0814 06:00:45.088278   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:45.122] I0814 06:00:45.121487   53051 controller_utils.go:1036] Caches are synced for garbage collector controller
W0814 06:00:45.240] E0814 06:00:45.240128   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:45.403] E0814 06:00:45.402434   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:45.541] E0814 06:00:45.540724   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:45.753] Recording: run_namespace_tests
I0814 06:00:45.753] Running command: run_namespace_tests
I0814 06:00:45.774] 
I0814 06:00:45.776] +++ Running case: test-cmd.run_namespace_tests 
I0814 06:00:45.779] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:00:45.781] +++ command: run_namespace_tests
I0814 06:00:45.791] +++ [0814 06:00:45] Testing kubectl(v1:namespaces)
I0814 06:00:45.883] namespace/my-namespace created
I0814 06:00:45.995] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 06:00:46.093] namespace "my-namespace" deleted
W0814 06:00:46.194] E0814 06:00:46.090280   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:46.241] E0814 06:00:46.241203   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:46.404] E0814 06:00:46.403952   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:46.542] E0814 06:00:46.541989   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:47.092] E0814 06:00:47.091946   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:47.243] E0814 06:00:47.242561   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:47.405] E0814 06:00:47.405150   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:47.544] E0814 06:00:47.543361   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:48.094] E0814 06:00:48.093370   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:48.244] E0814 06:00:48.244046   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:48.407] E0814 06:00:48.406421   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:48.545] E0814 06:00:48.544753   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:49.095] E0814 06:00:49.094819   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:49.247] E0814 06:00:49.246976   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:49.408] E0814 06:00:49.407837   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:49.546] E0814 06:00:49.546179   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:50.096] E0814 06:00:50.096055   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:50.248] E0814 06:00:50.247984   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:50.409] E0814 06:00:50.409340   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:50.548] E0814 06:00:50.548267   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:51.097] E0814 06:00:51.097077   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:51.198] namespace/my-namespace condition met
I0814 06:00:51.262] Successful
I0814 06:00:51.262] message:Error from server (NotFound): namespaces "my-namespace" not found
I0814 06:00:51.263] has: not found
I0814 06:00:51.347] namespace/my-namespace created
I0814 06:00:51.434] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0814 06:00:51.638] Successful
I0814 06:00:51.639] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 06:00:51.639] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0814 06:00:51.642] namespace "namespace-1565762404-8278" deleted
I0814 06:00:51.642] namespace "namespace-1565762405-7093" deleted
I0814 06:00:51.642] namespace "namespace-1565762408-604" deleted
I0814 06:00:51.642] namespace "namespace-1565762410-8632" deleted
I0814 06:00:51.642] namespace "namespace-1565762431-25953" deleted
I0814 06:00:51.643] namespace "namespace-1565762432-14429" deleted
I0814 06:00:51.643] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 06:00:51.643] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 06:00:51.643] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 06:00:51.643] has:warning: deleting cluster-scoped resources
I0814 06:00:51.644] Successful
I0814 06:00:51.645] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0814 06:00:51.645] namespace "kube-node-lease" deleted
I0814 06:00:51.645] namespace "my-namespace" deleted
I0814 06:00:51.645] namespace "namespace-1565762294-7851" deleted
... skipping 27 lines ...
I0814 06:00:51.650] namespace "namespace-1565762404-8278" deleted
I0814 06:00:51.650] namespace "namespace-1565762405-7093" deleted
I0814 06:00:51.650] namespace "namespace-1565762408-604" deleted
I0814 06:00:51.650] namespace "namespace-1565762410-8632" deleted
I0814 06:00:51.650] namespace "namespace-1565762431-25953" deleted
I0814 06:00:51.650] namespace "namespace-1565762432-14429" deleted
I0814 06:00:51.651] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0814 06:00:51.651] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0814 06:00:51.651] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0814 06:00:51.651] has:namespace "my-namespace" deleted
I0814 06:00:51.750] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0814 06:00:51.828] namespace/other created
I0814 06:00:51.928] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0814 06:00:52.025] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:00:52.194] pod/valid-pod created
I0814 06:00:52.289] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 06:00:52.385] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 06:00:52.483] Successful
I0814 06:00:52.484] message:error: a resource cannot be retrieved by name across all namespaces
I0814 06:00:52.484] has:a resource cannot be retrieved by name across all namespaces
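The error above is kubectl's guard against combining a specific object name with --all-namespaces; the namespaced lookups used elsewhere in this block are allowed. A sketch:

  kubectl get pods valid-pod --all-namespaces
  # -> error: a resource cannot be retrieved by name across all namespaces
  kubectl get pods valid-pod --namespace=other
  # -> lists the pod (a namespaced lookup by name is fine)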
W0814 06:00:52.585] E0814 06:00:51.249510   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:52.585] E0814 06:00:51.410800   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:52.586] E0814 06:00:51.549622   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:52.586] E0814 06:00:52.098640   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:52.586] E0814 06:00:52.250410   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:52.587] E0814 06:00:52.412078   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:52.587] E0814 06:00:52.551296   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:52.688] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0814 06:00:52.698] pod "valid-pod" force deleted
W0814 06:00:52.799] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0814 06:00:52.899] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:00:52.900] namespace "other" deleted
W0814 06:00:53.101] E0814 06:00:53.100531   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:53.252] E0814 06:00:53.252105   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:53.413] E0814 06:00:53.413284   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:53.553] E0814 06:00:53.552626   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:53.745] I0814 06:00:53.745313   53051 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1565762432-14429
W0814 06:00:53.750] I0814 06:00:53.750390   53051 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1565762432-14429
W0814 06:00:54.103] E0814 06:00:54.102975   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:54.254] E0814 06:00:54.253363   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:54.415] E0814 06:00:54.414906   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:54.554] E0814 06:00:54.554118   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:55.117] E0814 06:00:55.117006   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:55.255] E0814 06:00:55.255064   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:55.417] E0814 06:00:55.416466   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:55.556] E0814 06:00:55.555506   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:56.120] E0814 06:00:56.119729   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:56.258] E0814 06:00:56.257590   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:56.418] E0814 06:00:56.417979   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:56.559] E0814 06:00:56.558715   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:57.122] E0814 06:00:57.122215   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:57.262] E0814 06:00:57.260664   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:57.423] E0814 06:00:57.422271   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:57.562] E0814 06:00:57.561799   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:58.004] +++ exit code: 0
I0814 06:00:58.038] Recording: run_secrets_test
I0814 06:00:58.039] Running command: run_secrets_test
I0814 06:00:58.062] 
I0814 06:00:58.064] +++ Running case: test-cmd.run_secrets_test 
I0814 06:00:58.066] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 35 lines ...
I0814 06:00:58.340]   key1: dmFsdWUx
I0814 06:00:58.340] kind: Secret
I0814 06:00:58.340] metadata:
I0814 06:00:58.340]   creationTimestamp: null
I0814 06:00:58.341]   name: test
I0814 06:00:58.341] has not:example.com
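The secret shown above (data key1: dmFsdWUx, i.e. base64 of "value1") is produced by a dry-run create, which renders the object locally; the "has not:example.com" assertion then checks that the server named in the test kubeconfig (presumably an unreachable example.com endpoint) never shows up in the output. A hedged approximation of that step, with the exact flags used by the harness assumed:

  kubectl create secret generic test --from-literal=key1=value1 --dry-run -o yaml
  # -> apiVersion: v1
  #    data:
  #      key1: dmFsdWUx
  #    kind: Secret
  #    ...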
W0814 06:00:58.442] E0814 06:00:58.123606   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:58.443] E0814 06:00:58.262079   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:58.443] I0814 06:00:58.324769   69617 loader.go:375] Config loaded from file:  /tmp/tmp.HGEMZR7EeV/.kube/config
W0814 06:00:58.444] E0814 06:00:58.423564   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:58.544] core.sh:725: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-secrets\" }}found{{end}}{{end}}:: :
I0814 06:00:58.545] namespace/test-secrets created
I0814 06:00:58.631] core.sh:729: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
I0814 06:00:58.752] core.sh:733: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:00:58.848] secret/test-secret created
W0814 06:00:58.949] E0814 06:00:58.563053   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:59.050] core.sh:737: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 06:00:59.082] core.sh:738: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
I0814 06:00:59.298] secret "test-secret" deleted
W0814 06:00:59.399] E0814 06:00:59.125079   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:59.399] E0814 06:00:59.263575   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:00:59.425] E0814 06:00:59.424864   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:59.526] core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:00:59.557] secret/test-secret created
W0814 06:00:59.658] E0814 06:00:59.564958   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:00:59.759] core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 06:00:59.801] core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
I0814 06:01:00.062] secret "test-secret" deleted
W0814 06:01:00.164] E0814 06:01:00.126903   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:00.264] core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:01:00.283] secret/test-secret created
W0814 06:01:00.384] E0814 06:01:00.265018   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:00.428] E0814 06:01:00.427667   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:00.529] core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 06:01:00.529] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0814 06:01:00.660] secret "test-secret" deleted
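The .type assertions in this block (test-type, kubernetes.io/dockerconfigjson, kubernetes.io/tls) correspond to kubectl's typed create subcommands; a sketch with placeholder credentials and certificate paths:

  kubectl create secret docker-registry test-secret --namespace=test-secrets \
    --docker-username=user --docker-password=pass --docker-email=user@example.com
  kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'
  # -> kubernetes.io/dockerconfigjson
  kubectl delete secret test-secret --namespace=test-secrets
  kubectl create secret tls test-secret --namespace=test-secrets --cert=tls.crt --key=tls.key
  kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'
  # -> kubernetes.io/tls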
W0814 06:01:00.761] E0814 06:01:00.566987   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:00.862] secret/test-secret created
I0814 06:01:00.896] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0814 06:01:01.010] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0814 06:01:01.148] secret "test-secret" deleted
W0814 06:01:01.249] E0814 06:01:01.128658   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:01.267] E0814 06:01:01.266614   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:01.368] secret/secret-string-data created
I0814 06:01:01.451] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 06:01:01.555] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0814 06:01:01.662] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0814 06:01:01.748] secret "secret-string-data" deleted
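core.sh:796-798 exercise stringData handling: values supplied as stringData are stored base64-encoded under data (v1 becomes djE=, v2 becomes djI=) and stringData itself is never persisted, hence the <no value> check. A minimal manifest that reproduces the observed state (the manifest the test really applies may differ):

kubectl create --namespace=test-secrets -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-string-data
type: Opaque
stringData:
  k1: v1
  k2: v2
EOF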
W0814 06:01:01.849] E0814 06:01:01.429992   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:01.850] E0814 06:01:01.568914   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:01.950] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:01:02.049] secret "test-secret" deleted
W0814 06:01:02.150] E0814 06:01:02.130141   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:02.251] namespace "test-secrets" deleted
W0814 06:01:02.351] E0814 06:01:02.267906   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:02.432] E0814 06:01:02.431317   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:02.571] E0814 06:01:02.570333   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:03.131] E0814 06:01:03.131314   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:03.270] E0814 06:01:03.269533   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:03.433] E0814 06:01:03.432586   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:03.572] E0814 06:01:03.571693   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:04.134] E0814 06:01:04.133561   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:04.272] E0814 06:01:04.271818   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:04.435] E0814 06:01:04.434892   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:04.573] E0814 06:01:04.573092   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:05.135] E0814 06:01:05.134878   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:05.273] E0814 06:01:05.273094   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:05.436] E0814 06:01:05.436215   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:05.575] E0814 06:01:05.574427   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:06.136] E0814 06:01:06.136187   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:06.275] E0814 06:01:06.274422   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:06.438] E0814 06:01:06.437580   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:06.577] E0814 06:01:06.576917   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:07.138] E0814 06:01:07.137523   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:07.277] E0814 06:01:07.276550   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:07.378] +++ exit code: 0
I0814 06:01:07.378] Recording: run_configmap_tests
I0814 06:01:07.378] Running command: run_configmap_tests
I0814 06:01:07.379] 
I0814 06:01:07.379] +++ Running case: test-cmd.run_configmap_tests 
I0814 06:01:07.379] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:01:07.380] +++ command: run_configmap_tests
I0814 06:01:07.392] +++ [0814 06:01:07] Creating namespace namespace-1565762467-6854
I0814 06:01:07.491] namespace/namespace-1565762467-6854 created
I0814 06:01:07.583] Context "test" modified.
I0814 06:01:07.592] +++ [0814 06:01:07] Testing configmaps
W0814 06:01:07.693] E0814 06:01:07.438980   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:07.693] E0814 06:01:07.578253   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:07.850] configmap/test-configmap created
I0814 06:01:07.970] core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
I0814 06:01:08.103] configmap "test-configmap" deleted
W0814 06:01:08.205] E0814 06:01:08.139015   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:08.279] E0814 06:01:08.279184   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:08.381] core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
I0814 06:01:08.438] namespace/test-configmaps created
I0814 06:01:08.451] core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
I0814 06:01:08.561] core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
I0814 06:01:08.666] core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
I0814 06:01:08.761] configmap/test-configmap created
I0814 06:01:08.853] configmap/test-binary-configmap created
W0814 06:01:08.954] E0814 06:01:08.440803   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:08.955] E0814 06:01:08.580035   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:09.055] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0814 06:01:09.079] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0814 06:01:09.407] configmap "test-configmap" deleted
W0814 06:01:09.508] E0814 06:01:09.140321   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:09.509] E0814 06:01:09.280528   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:09.509] E0814 06:01:09.442640   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:09.582] E0814 06:01:09.581462   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:09.683] configmap "test-binary-configmap" deleted
I0814 06:01:09.683] namespace "test-configmaps" deleted
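core.sh:48-49 cover both a plain and a binary configmap; a sketch of equivalent commands (the literal value and the file path are illustrative assumptions, and content that is not valid UTF-8 ends up under .binaryData rather than .data):

  kubectl create configmap test-configmap --namespace=test-configmaps \
    --from-literal=key1=value1
  kubectl create configmap test-binary-configmap --namespace=test-configmaps \
    --from-file=key=./some-binary-file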
W0814 06:01:10.144] E0814 06:01:10.142936   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:10.283] E0814 06:01:10.282299   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:10.444] E0814 06:01:10.444084   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:10.585] E0814 06:01:10.584131   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:11.146] E0814 06:01:11.145191   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:11.284] E0814 06:01:11.283745   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:11.446] E0814 06:01:11.445733   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:11.586] E0814 06:01:11.585904   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:12.147] E0814 06:01:12.146447   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:12.285] E0814 06:01:12.285080   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:12.447] E0814 06:01:12.447054   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:12.587] E0814 06:01:12.587242   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:13.149] E0814 06:01:13.148934   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:13.287] E0814 06:01:13.286525   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:13.449] E0814 06:01:13.448443   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:13.589] E0814 06:01:13.588898   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:14.151] E0814 06:01:14.150416   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:14.288] E0814 06:01:14.287701   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:14.450] E0814 06:01:14.449872   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:14.590] E0814 06:01:14.590157   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:14.831] +++ exit code: 0
I0814 06:01:15.015] Recording: run_client_config_tests
I0814 06:01:15.015] Running command: run_client_config_tests
I0814 06:01:15.036] 
I0814 06:01:15.039] +++ Running case: test-cmd.run_client_config_tests 
I0814 06:01:15.042] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:01:15.045] +++ command: run_client_config_tests
I0814 06:01:15.059] +++ [0814 06:01:15] Creating namespace namespace-1565762475-5171
I0814 06:01:15.158] namespace/namespace-1565762475-5171 created
I0814 06:01:15.249] Context "test" modified.
I0814 06:01:15.257] +++ [0814 06:01:15] Testing client config
I0814 06:01:15.350] Successful
I0814 06:01:15.351] message:error: stat missing: no such file or directory
I0814 06:01:15.352] has:missing: no such file or directory
I0814 06:01:15.444] Successful
I0814 06:01:15.444] message:error: stat missing: no such file or directory
I0814 06:01:15.445] has:missing: no such file or directory
I0814 06:01:15.532] Successful
I0814 06:01:15.533] message:error: stat missing: no such file or directory
I0814 06:01:15.533] has:missing: no such file or directory
I0814 06:01:15.622] Successful
I0814 06:01:15.677] message:Error in configuration: context was not found for specified context: missing-context
I0814 06:01:15.677] has:context was not found for specified context: missing-context
I0814 06:01:15.711] Successful
I0814 06:01:15.712] message:error: no server found for cluster "missing-cluster"
I0814 06:01:15.713] has:no server found for cluster "missing-cluster"
I0814 06:01:15.802] Successful
I0814 06:01:15.893] message:error: auth info "missing-user" does not exist
I0814 06:01:15.893] has:auth info "missing-user" does not exist
I0814 06:01:15.962] Successful
I0814 06:01:15.963] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0814 06:01:15.964] has:error loading config file
I0814 06:01:16.060] Successful
I0814 06:01:16.156] message:error: stat missing-config: no such file or directory
I0814 06:01:16.156] has:no such file or directory
I0814 06:01:16.170] +++ exit code: 0
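run_client_config_tests exercises kubectl's error paths for missing kubeconfig pieces; the messages above correspond to invocations along these lines (the resource being queried is an arbitrary choice):

  kubectl get pods --kubeconfig=missing
  kubectl get pods --context=missing-context
  kubectl get pods --cluster=missing-cluster
  kubectl get pods --user=missing-user
  kubectl get pods --kubeconfig=missing-config
  # a kubeconfig at /tmp/newconfig.yaml declaring apiVersion v-1 produces the
  # 'no kind "Config" is registered for version "v-1"' error shown above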
W0814 06:01:16.272] E0814 06:01:15.151840   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:16.273] E0814 06:01:15.289077   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:16.273] E0814 06:01:15.451599   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:16.274] E0814 06:01:15.591897   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:16.274] E0814 06:01:16.153226   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:16.291] E0814 06:01:16.291048   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:16.393] Recording: run_service_accounts_tests
I0814 06:01:16.586] Running command: run_service_accounts_tests
I0814 06:01:16.588] 
I0814 06:01:16.590] +++ Running case: test-cmd.run_service_accounts_tests 
I0814 06:01:16.593] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:01:16.597] +++ command: run_service_accounts_tests
I0814 06:01:16.610] +++ [0814 06:01:16] Creating namespace namespace-1565762476-18110
I0814 06:01:16.711] namespace/namespace-1565762476-18110 created
I0814 06:01:16.801] Context "test" modified.
I0814 06:01:16.810] +++ [0814 06:01:16] Testing service accounts
W0814 06:01:16.912] E0814 06:01:16.453329   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:16.982] E0814 06:01:16.593580   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:17.084] core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
I0814 06:01:17.102] namespace/test-service-accounts created
W0814 06:01:17.203] E0814 06:01:17.154585   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:17.295] E0814 06:01:17.292843   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:17.396] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0814 06:01:17.396] serviceaccount/test-service-account created
W0814 06:01:17.499] E0814 06:01:17.454680   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:17.595] E0814 06:01:17.594943   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:17.696] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0814 06:01:17.697] serviceaccount "test-service-account" deleted
I0814 06:01:17.713] namespace "test-service-accounts" deleted
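A sketch of the service-account flow behind core.sh:828-838 (inferred from the output above, not copied from the script):

  kubectl create namespace test-service-accounts
  kubectl create serviceaccount test-service-account --namespace=test-service-accounts
  kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
  kubectl delete namespace test-service-accounts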
W0814 06:01:18.157] E0814 06:01:18.156273   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:18.295] E0814 06:01:18.294427   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:18.457] E0814 06:01:18.456529   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:18.597] E0814 06:01:18.596703   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:19.159] E0814 06:01:19.158929   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:19.297] E0814 06:01:19.296957   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:19.458] E0814 06:01:19.457962   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:19.600] E0814 06:01:19.599503   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:20.162] E0814 06:01:20.161524   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:20.300] E0814 06:01:20.299376   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:20.461] E0814 06:01:20.460828   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:20.602] E0814 06:01:20.602004   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:21.163] E0814 06:01:21.162816   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:21.302] E0814 06:01:21.302054   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:21.463] E0814 06:01:21.462344   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:21.604] E0814 06:01:21.603421   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:22.164] E0814 06:01:22.164182   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:22.304] E0814 06:01:22.303310   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:22.464] E0814 06:01:22.463665   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:22.605] E0814 06:01:22.604790   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:22.826] +++ exit code: 0
I0814 06:01:22.863] Recording: run_job_tests
I0814 06:01:22.864] Running command: run_job_tests
I0814 06:01:22.884] 
I0814 06:01:22.887] +++ Running case: test-cmd.run_job_tests 
I0814 06:01:22.890] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:01:22.893] +++ command: run_job_tests
I0814 06:01:22.907] +++ [0814 06:01:22] Creating namespace namespace-1565762482-11338
I0814 06:01:23.004] namespace/namespace-1565762482-11338 created
I0814 06:01:23.098] Context "test" modified.
I0814 06:01:23.106] +++ [0814 06:01:23] Testing job
W0814 06:01:23.207] E0814 06:01:23.165480   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:23.305] E0814 06:01:23.304562   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:23.406] batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: :
I0814 06:01:23.406] namespace/test-jobs created
I0814 06:01:23.443] batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
I0814 06:01:23.548] cronjob.batch/pi created
W0814 06:01:23.649] E0814 06:01:23.465033   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:23.650] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 06:01:23.650] E0814 06:01:23.606137   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:23.751] batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
I0814 06:01:23.767] NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
I0814 06:01:23.770] pi     59 23 31 2 *   False     0        <none>          0s
I0814 06:01:23.880] Name:                          pi
I0814 06:01:23.880] Namespace:                     test-jobs
I0814 06:01:23.881] Labels:                        run=pi
I0814 06:01:23.881] Annotations:                   <none>
I0814 06:01:23.881] Schedule:                      59 23 31 2 *
I0814 06:01:23.881] Concurrency Policy:            Allow
I0814 06:01:23.881] Suspend:                       False
I0814 06:01:23.882] Successful Job History Limit:  3
I0814 06:01:23.882] Failed Job History Limit:      1
I0814 06:01:23.882] Starting Deadline Seconds:     <unset>
I0814 06:01:23.882] Selector:                      <unset>
I0814 06:01:23.882] Parallelism:                   <unset>
I0814 06:01:23.882] Completions:                   <unset>
I0814 06:01:23.882] Pod Template:
I0814 06:01:23.883]   Labels:  run=pi
... skipping 18 lines ...
I0814 06:01:23.885] Events:              <none>
I0814 06:01:23.985] Successful
I0814 06:01:23.985] message:job.batch/test-job
I0814 06:01:23.985] has:job.batch/test-job
I0814 06:01:24.101] batch.sh:48: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: 
I0814 06:01:24.221] job.batch/test-job created
W0814 06:01:24.322] E0814 06:01:24.166837   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:24.323] I0814 06:01:24.222804   53051 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"0339dc30-3927-418d-a2e3-29153dfb49fe", APIVersion:"batch/v1", ResourceVersion:"1361", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-47jz7
W0814 06:01:24.324] E0814 06:01:24.305876   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:24.424] batch.sh:53: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
I0814 06:01:24.459] NAME       COMPLETIONS   DURATION   AGE
I0814 06:01:24.460] test-job   0/1           0s         0s
W0814 06:01:24.561] E0814 06:01:24.466390   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:24.608] E0814 06:01:24.607523   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:24.708] Name:           test-job
I0814 06:01:24.709] Namespace:      test-jobs
I0814 06:01:24.709] Selector:       controller-uid=0339dc30-3927-418d-a2e3-29153dfb49fe
I0814 06:01:24.710] Labels:         controller-uid=0339dc30-3927-418d-a2e3-29153dfb49fe
I0814 06:01:24.710]                 job-name=test-job
I0814 06:01:24.710]                 run=pi
I0814 06:01:24.711] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0814 06:01:24.711] Controlled By:  CronJob/pi
I0814 06:01:24.711] Parallelism:    1
I0814 06:01:24.711] Completions:    1
I0814 06:01:24.712] Start Time:     Wed, 14 Aug 2019 06:01:24 +0000
I0814 06:01:24.712] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0814 06:01:24.712] Pod Template:
I0814 06:01:24.713]   Labels:  controller-uid=0339dc30-3927-418d-a2e3-29153dfb49fe
I0814 06:01:24.713]            job-name=test-job
I0814 06:01:24.713]            run=pi
I0814 06:01:24.714]   Containers:
I0814 06:01:24.714]    pi:
... skipping 15 lines ...
I0814 06:01:24.719]   Type    Reason            Age   From            Message
I0814 06:01:24.719]   ----    ------            ----  ----            -------
I0814 06:01:24.719]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-47jz7
I0814 06:01:24.720] job.batch "test-job" deleted
I0814 06:01:24.822] cronjob.batch "pi" deleted
I0814 06:01:24.931] namespace "test-jobs" deleted
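The deprecation warning above shows the pi CronJob was created with kubectl run --generator=cronjob/v1beta1, and the cronjob.kubernetes.io/instantiate: manual annotation on test-job is what kubectl create job --from produces. A sketch with an assumed image and command (the real ones sit in the skipped describe output):

  kubectl run pi --namespace=test-jobs --generator=cronjob/v1beta1 \
    --schedule="59 23 31 2 *" --restart=OnFailure \
    --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(20)'
  kubectl create job test-job --namespace=test-jobs --from=cronjob/pi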
W0814 06:01:25.168] E0814 06:01:25.168163   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:25.307] E0814 06:01:25.307144   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:25.469] E0814 06:01:25.468213   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:25.610] E0814 06:01:25.609647   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:26.170] E0814 06:01:26.169624   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:26.310] E0814 06:01:26.309497   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:26.471] E0814 06:01:26.471226   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:26.613] E0814 06:01:26.612482   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:27.171] E0814 06:01:27.170959   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:27.311] E0814 06:01:27.311171   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:27.474] E0814 06:01:27.473752   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:27.614] E0814 06:01:27.613894   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:28.173] E0814 06:01:28.172542   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:28.314] E0814 06:01:28.313566   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:28.475] E0814 06:01:28.475069   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:28.615] E0814 06:01:28.615253   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:29.174] E0814 06:01:29.173889   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:29.316] E0814 06:01:29.316271   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:29.477] E0814 06:01:29.476323   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:29.617] E0814 06:01:29.616562   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:30.048] +++ exit code: 0
I0814 06:01:30.087] Recording: run_create_job_tests
I0814 06:01:30.087] Running command: run_create_job_tests
I0814 06:01:30.108] 
I0814 06:01:30.110] +++ Running case: test-cmd.run_create_job_tests 
I0814 06:01:30.113] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:01:30.115] +++ command: run_create_job_tests
I0814 06:01:30.130] +++ [0814 06:01:30] Creating namespace namespace-1565762490-26979
I0814 06:01:30.230] namespace/namespace-1565762490-26979 created
I0814 06:01:30.320] Context "test" modified.
W0814 06:01:30.421] E0814 06:01:30.175261   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:30.422] E0814 06:01:30.317687   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:30.425] I0814 06:01:30.424929   53051 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565762490-26979", Name:"test-job", UID:"56a32b53-5931-44a6-bb9a-b7720551f461", APIVersion:"batch/v1", ResourceVersion:"1379", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-xnsxt
W0814 06:01:30.478] E0814 06:01:30.477900   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:30.579] job.batch/test-job created
I0814 06:01:30.580] create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
I0814 06:01:30.657] job.batch "test-job" deleted
W0814 06:01:30.758] E0814 06:01:30.617932   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:30.765] I0814 06:01:30.764853   53051 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565762490-26979", Name:"test-job-pi", UID:"8fa111c5-4177-4aca-8fe3-7ad0a6292108", APIVersion:"batch/v1", ResourceVersion:"1386", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-lp8fv
I0814 06:01:30.866] job.batch/test-job-pi created
I0814 06:01:30.900] create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
I0814 06:01:30.997] job.batch "test-job-pi" deleted
W0814 06:01:31.098] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0814 06:01:31.177] E0814 06:01:31.176653   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:31.245] I0814 06:01:31.244394   53051 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1565762490-26979", Name:"my-pi", UID:"bcf71da2-f2ae-4eb2-9c22-fac6a815e61d", APIVersion:"batch/v1", ResourceVersion:"1394", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-zqb8j
W0814 06:01:31.319] E0814 06:01:31.318910   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:31.420] cronjob.batch/test-pi created
I0814 06:01:31.420] job.batch/my-pi created
I0814 06:01:31.420] Successful
I0814 06:01:31.421] message:[perl -Mbignum=bpi -wle print bpi(10)]
I0814 06:01:31.421] has:perl -Mbignum=bpi -wle print bpi(10)
I0814 06:01:31.461] job.batch "my-pi" deleted
W0814 06:01:31.562] E0814 06:01:31.478999   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:31.620] E0814 06:01:31.619355   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:31.721] cronjob.batch "test-pi" deleted
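create.sh:86 and create.sh:92 verify the images on directly created jobs, and my-pi inherits its perl command from the test-pi CronJob; roughly:

  kubectl create job test-job --image=k8s.gcr.io/nginx:test-cmd
  kubectl create job test-job-pi --image=k8s.gcr.io/perl \
    -- perl -Mbignum=bpi -wle 'print bpi(10)'
  kubectl create job my-pi --from=cronjob/test-pi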
I0814 06:01:31.722] +++ exit code: 0
I0814 06:01:31.722] Recording: run_pod_templates_tests
I0814 06:01:31.722] Running command: run_pod_templates_tests
I0814 06:01:31.723] 
I0814 06:01:31.723] +++ Running case: test-cmd.run_pod_templates_tests 
... skipping 2 lines ...
I0814 06:01:31.724] +++ [0814 06:01:31] Creating namespace namespace-1565762491-11493
I0814 06:01:31.776] namespace/namespace-1565762491-11493 created
I0814 06:01:31.869] Context "test" modified.
I0814 06:01:31.876] +++ [0814 06:01:31] Testing pod templates
I0814 06:01:31.985] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0814 06:01:32.202] podtemplate/nginx created
W0814 06:01:32.303] E0814 06:01:32.177902   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:32.304] I0814 06:01:32.180902   49577 controller.go:606] quota admission added evaluator for: podtemplates
W0814 06:01:32.322] E0814 06:01:32.321388   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:32.423] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 06:01:32.433] NAME    CONTAINERS   IMAGES   POD LABELS
I0814 06:01:32.435] nginx   nginx        nginx    name=nginx
W0814 06:01:32.536] E0814 06:01:32.480663   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:32.621] E0814 06:01:32.620698   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:32.722] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0814 06:01:32.766] podtemplate "nginx" deleted
I0814 06:01:32.899] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
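The podtemplate listing above (container nginx, image nginx, pod labels name=nginx) corresponds to a manifest along these lines; the test loads a fixture file, so this is only a reconstruction:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: PodTemplate
metadata:
  name: nginx
template:
  metadata:
    labels:
      name: nginx
  spec:
    containers:
    - name: nginx
      image: nginx
EOF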
I0814 06:01:32.912] +++ exit code: 0
I0814 06:01:32.952] Recording: run_service_tests
I0814 06:01:32.953] Running command: run_service_tests
I0814 06:01:32.975] 
I0814 06:01:32.979] +++ Running case: test-cmd.run_service_tests 
I0814 06:01:32.982] +++ working dir: /go/src/k8s.io/kubernetes
I0814 06:01:32.984] +++ command: run_service_tests
I0814 06:01:33.085] Context "test" modified.
I0814 06:01:33.094] +++ [0814 06:01:33] Testing kubectl(v1:services)
W0814 06:01:33.195] E0814 06:01:33.179748   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:33.323] core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 06:01:33.439] service/redis-master created
W0814 06:01:33.540] E0814 06:01:33.323062   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:33.540] E0814 06:01:33.482395   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:33.622] E0814 06:01:33.621980   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:33.723] core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 06:01:33.730] core.sh:864: Successful describe services redis-master:
I0814 06:01:33.731] Name:              redis-master
I0814 06:01:33.731] Namespace:         default
I0814 06:01:33.731] Labels:            app=redis
I0814 06:01:33.731]                    role=master
... skipping 51 lines ...
I0814 06:01:34.180] Port:              <unset>  6379/TCP
I0814 06:01:34.180] TargetPort:        6379/TCP
I0814 06:01:34.180] Endpoints:         <none>
I0814 06:01:34.180] Session Affinity:  None
I0814 06:01:34.180] Events:            <none>
W0814 06:01:34.281] E0814 06:01:34.181119   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:34.325] E0814 06:01:34.324461   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:34.426] Successful describe services:
I0814 06:01:34.426] Name:              kubernetes
I0814 06:01:34.427] Namespace:         default
I0814 06:01:34.427] Labels:            component=apiserver
I0814 06:01:34.427]                    provider=kubernetes
I0814 06:01:34.428] Annotations:       <none>
... skipping 124 lines ...
I0814 06:01:35.017]   - port: 6379
I0814 06:01:35.018]     targetPort: 6379
I0814 06:01:35.018]   selector:
I0814 06:01:35.018]     role: padawan
I0814 06:01:35.018] status:
I0814 06:01:35.018]   loadBalancer: {}
W0814 06:01:35.119] E0814 06:01:34.484025   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:35.120] E0814 06:01:34.623254   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:35.183] E0814 06:01:35.182965   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:35.284] apiVersion: v1
I0814 06:01:35.285] kind: Service
I0814 06:01:35.285] metadata:
I0814 06:01:35.286]   creationTimestamp: "2019-08-14T06:01:33Z"
I0814 06:01:35.286]   labels:
I0814 06:01:35.286]     app: redis
... skipping 43 lines ...
I0814 06:01:35.295]   type: ClusterIP
I0814 06:01:35.295] status:
I0814 06:01:35.295]   loadBalancer: {}
I0814 06:01:35.295] service/redis-master selector updated
I0814 06:01:35.362] core.sh:890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
I0814 06:01:35.508] service/redis-master selector updated
W0814 06:01:35.609] E0814 06:01:35.327373   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:35.610] E0814 06:01:35.485409   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:35.625] E0814 06:01:35.625045   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:35.726] core.sh:894: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0814 06:01:35.768] apiVersion: v1
I0814 06:01:35.769] kind: Service
I0814 06:01:35.769] metadata:
I0814 06:01:35.770]   creationTimestamp: "2019-08-14T06:01:33Z"
I0814 06:01:35.770]   labels:
... skipping 47 lines ...
I0814 06:01:35.775]   selector:
I0814 06:01:35.775]     role: padawan
I0814 06:01:35.775]   sessionAffinity: None
I0814 06:01:35.775]   type: ClusterIP
I0814 06:01:35.776] status:
I0814 06:01:35.776]   loadBalancer: {}
W0814 06:01:35.876] error: you must specify resources by --filename when --local is set.
W0814 06:01:35.877] Example resource specifications include:
W0814 06:01:35.877]    '-f rsrc.yaml'
W0814 06:01:35.877]    '--filename=rsrc.json'
I0814 06:01:35.990] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0814 06:01:36.234] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 06:01:36.346] service "redis-master" deleted
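The "selector updated" messages and the "--filename when --local is set" error above match kubectl set selector usage; a sketch (the manifest file name is illustrative):

  kubectl set selector service redis-master role=padawan
  kubectl get services redis-master \
    -o go-template='{{range.spec.selector}}{{.}}:{{end}}'
  # --local only works against a manifest supplied with -f:
  kubectl set selector -f redis-master-service.yaml --local -o yaml role=padawan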
W0814 06:01:36.448] E0814 06:01:36.185839   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:36.449] E0814 06:01:36.329062   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:36.487] E0814 06:01:36.486823   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:36.588] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 06:01:36.604] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 06:01:36.905] service/redis-master created
W0814 06:01:37.006] E0814 06:01:36.627587   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:37.107] core.sh:920: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 06:01:37.177] core.sh:924: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0814 06:01:37.375] service/service-v1-test created
W0814 06:01:37.476] E0814 06:01:37.187753   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:37.477] E0814 06:01:37.331002   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:37.490] E0814 06:01:37.489671   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:37.591] core.sh:945: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0814 06:01:37.707] service/service-v1-test replaced
W0814 06:01:37.808] E0814 06:01:37.629359   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:37.909] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0814 06:01:37.939] service "redis-master" deleted
I0814 06:01:38.048] service "service-v1-test" deleted
I0814 06:01:38.164] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 06:01:38.278] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0814 06:01:38.479] service/redis-master created
W0814 06:01:38.580] E0814 06:01:38.189243   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:38.580] E0814 06:01:38.332678   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:38.581] E0814 06:01:38.492319   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:38.631] E0814 06:01:38.630768   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0814 06:01:38.734] service/redis-slave created
I0814 06:01:38.826] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0814 06:01:38.939] Successful
I0814 06:01:38.939] message:NAME           RSRC
I0814 06:01:38.940] kubernetes     144
I0814 06:01:38.940] redis-master   1430
I0814 06:01:38.940] redis-slave    1433
I0814 06:01:38.940] has:redis-master
I0814 06:01:39.052] core.sh:979: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0814 06:01:39.170] service "redis-master" deleted
I0814 06:01:39.181] service "redis-slave" deleted
W0814 06:01:39.282] E0814 06:01:39.191323   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0814 06:01:39.335] E0814 06:01:39.334545   53051 reflector.go:125] k8s.io/client-go/metadata/metadatainformer/informer.go:89: