Result: FAILURE
Tests: 1 failed / 665 succeeded
Started: 2019-03-20 07:11
Elapsed: 26m15s
Revision:
Builder: gke-prow-containerd-pool-99179761-hqtl
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/15380a5d-0efe-49b4-9bf9-2471c9208c80/targets/test'}}
pod: ed9e3e60-4ade-11e9-ab9f-0a580a6c0a8e
resultstore: https://source.cloud.google.com/results/invocations/15380a5d-0efe-49b4-9bf9-2471c9208c80/targets/test
infra-commit: 3931105de
repo: k8s.io/kubernetes
repo-commit: 6f9bf5fe98bcc3b436fea4d6dd345a1502d20778
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 27s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
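
Running the command above directly requires a local etcd listening on 127.0.0.1:2379, which is the endpoint the apiserver fixture in the log below connects to. A sketch of the usual alternative, assuming the standard k8s.io/kubernetes integration-test workflow (the make target starts etcd via the hack scripts; WHAT and KUBE_TEST_ARGS are the customary knobs and may vary by branch):

# from the k8s.io/kubernetes repo root; starts a local etcd and runs only this test
make test-integration WHAT=./test/integration/scheduler KUBE_TEST_ARGS="-run TestPreemptionRaces$"

The failing test's captured output follows.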
I0320 07:29:18.830861  105913 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0320 07:29:18.830894  105913 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0320 07:29:18.830904  105913 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0320 07:29:18.830912  105913 master.go:233] Using reconciler: 
I0320 07:29:18.832254  105913 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.832359  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.832374  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.832417  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.832467  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.832789  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.832859  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.832921  105913 store.go:1319] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0320 07:29:18.832959  105913 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.833003  105913 reflector.go:161] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0320 07:29:18.833172  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.833222  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.833265  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.833329  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.833867  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.833892  105913 store.go:1319] Monitoring events count at <storage-prefix>//events
I0320 07:29:18.833896  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.833912  105913 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.834005  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.834020  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.834068  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.834172  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.834658  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.834781  105913 store.go:1319] Monitoring limitranges count at <storage-prefix>//limitranges
I0320 07:29:18.834802  105913 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.834847  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.834854  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.834872  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.834904  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.834922  105913 reflector.go:161] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0320 07:29:18.835036  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.835238  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.835332  105913 store.go:1319] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0320 07:29:18.835685  105913 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.835799  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.835809  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.835849  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.835883  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.835904  105913 reflector.go:161] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0320 07:29:18.836044  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.836527  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.836614  105913 store.go:1319] Monitoring secrets count at <storage-prefix>//secrets
I0320 07:29:18.836761  105913 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.836840  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.836850  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.836876  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.836931  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.836996  105913 reflector.go:161] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0320 07:29:18.837135  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.839735  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.839953  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.840139  105913 store.go:1319] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0320 07:29:18.840214  105913 reflector.go:161] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0320 07:29:18.840378  105913 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.840499  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.840549  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.840629  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.840735  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.841410  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.841483  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.841503  105913 store.go:1319] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0320 07:29:18.841522  105913 reflector.go:161] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0320 07:29:18.841640  105913 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.841730  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.841750  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.841784  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.841824  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.842050  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.842110  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.842154  105913 store.go:1319] Monitoring configmaps count at <storage-prefix>//configmaps
I0320 07:29:18.842206  105913 reflector.go:161] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0320 07:29:18.842296  105913 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.842359  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.842373  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.842632  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.842680  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.842950  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.843036  105913 store.go:1319] Monitoring namespaces count at <storage-prefix>//namespaces
I0320 07:29:18.843197  105913 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.843246  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.843255  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.843280  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.843333  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.843354  105913 reflector.go:161] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0320 07:29:18.843509  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.843759  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.843866  105913 store.go:1319] Monitoring endpoints count at <storage-prefix>//endpoints
I0320 07:29:18.844030  105913 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.844100  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.844109  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.844134  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.844175  105913 reflector.go:161] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0320 07:29:18.844250  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.844303  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.844564  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.844694  105913 store.go:1319] Monitoring nodes count at <storage-prefix>//nodes
I0320 07:29:18.844839  105913 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.844911  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.844921  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.844955  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.844997  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.845019  105913 reflector.go:161] Listing and watching *core.Node from storage/cacher.go:/nodes
I0320 07:29:18.845226  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.845518  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.845866  105913 store.go:1319] Monitoring pods count at <storage-prefix>//pods
I0320 07:29:18.845886  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.845910  105913 reflector.go:161] Listing and watching *core.Pod from storage/cacher.go:/pods
I0320 07:29:18.846047  105913 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.846145  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.846168  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.846192  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.846283  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.846479  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.846580  105913 store.go:1319] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0320 07:29:18.846751  105913 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.846832  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.846844  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.846873  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.846878  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.846899  105913 reflector.go:161] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0320 07:29:18.846964  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.847310  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.847404  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.847426  105913 store.go:1319] Monitoring services count at <storage-prefix>//services
I0320 07:29:18.847447  105913 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.847511  105913 reflector.go:161] Listing and watching *core.Service from storage/cacher.go:/services
I0320 07:29:18.847533  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.847541  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.847564  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.847668  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.848037  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.848130  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.848164  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.848178  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.848208  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.848241  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.849113  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.849248  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.849310  105913 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.849375  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.849399  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.849440  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.849485  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.849738  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.849764  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.849836  105913 store.go:1319] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0320 07:29:18.849933  105913 reflector.go:161] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0320 07:29:18.865102  105913 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0320 07:29:18.865136  105913 master.go:425] Enabling API group "authentication.k8s.io".
I0320 07:29:18.865176  105913 master.go:425] Enabling API group "authorization.k8s.io".
I0320 07:29:18.865336  105913 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.865434  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.865463  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.865516  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.865585  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.866258  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.866325  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.866424  105913 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0320 07:29:18.866471  105913 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0320 07:29:18.866584  105913 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.866676  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.866691  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.866721  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.866785  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.867016  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.867138  105913 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0320 07:29:18.867195  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.867225  105913 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0320 07:29:18.867486  105913 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.867566  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.867593  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.867624  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.867715  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.868030  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.868058  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.868167  105913 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0320 07:29:18.868182  105913 master.go:425] Enabling API group "autoscaling".
I0320 07:29:18.868194  105913 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0320 07:29:18.868342  105913 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.868413  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.868422  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.868456  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.868523  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.868759  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.868801  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.868898  105913 store.go:1319] Monitoring jobs.batch count at <storage-prefix>//jobs
I0320 07:29:18.868961  105913 reflector.go:161] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0320 07:29:18.869103  105913 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.869190  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.869205  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.869238  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.869320  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.869623  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.869673  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.869771  105913 store.go:1319] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0320 07:29:18.869798  105913 master.go:425] Enabling API group "batch".
I0320 07:29:18.869860  105913 reflector.go:161] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0320 07:29:18.869998  105913 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.870167  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.870192  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.870223  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.870294  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.870608  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.870684  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.870739  105913 store.go:1319] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0320 07:29:18.870763  105913 master.go:425] Enabling API group "certificates.k8s.io".
I0320 07:29:18.870814  105913 reflector.go:161] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0320 07:29:18.870935  105913 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.870989  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.870997  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.871041  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.871089  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.871889  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.871987  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.872121  105913 store.go:1319] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0320 07:29:18.872167  105913 reflector.go:161] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0320 07:29:18.872625  105913 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.872761  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.872782  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.872813  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.872860  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.873061  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.873164  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.873255  105913 store.go:1319] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0320 07:29:18.873270  105913 master.go:425] Enabling API group "coordination.k8s.io".
I0320 07:29:18.873310  105913 reflector.go:161] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0320 07:29:18.873457  105913 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.873557  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.873571  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.873644  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.873721  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.873945  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.874011  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.874039  105913 store.go:1319] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0320 07:29:18.874108  105913 reflector.go:161] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0320 07:29:18.874220  105913 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.874297  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.874336  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.874407  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.874464  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.874692  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.874759  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.874910  105913 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0320 07:29:18.875005  105913 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0320 07:29:18.875089  105913 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.875357  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.875403  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.875483  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.875769  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.875981  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.876149  105913 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0320 07:29:18.876186  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.876273  105913 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0320 07:29:18.876330  105913 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.876385  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.876405  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.876430  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.876490  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.876719  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.876827  105913 store.go:1319] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0320 07:29:18.876969  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.876982  105913 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.877054  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.877072  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.877158  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.877234  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.877502  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.877588  105913 store.go:1319] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0320 07:29:18.877630  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.877723  105913 reflector.go:161] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0320 07:29:18.877016  105913 reflector.go:161] Listing and watching *networking.Ingress from storage/cacher.go:/ingresses
I0320 07:29:18.877900  105913 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.877983  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.877991  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.878016  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.878102  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.880195  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.880226  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.880293  105913 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0320 07:29:18.880360  105913 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0320 07:29:18.880467  105913 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.880567  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.880581  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.880602  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.880757  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.881016  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.881115  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.881149  105913 store.go:1319] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0320 07:29:18.881167  105913 master.go:425] Enabling API group "extensions".
I0320 07:29:18.881195  105913 reflector.go:161] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0320 07:29:18.881351  105913 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.881427  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.881442  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.881470  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.881528  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.881762  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.881845  105913 store.go:1319] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0320 07:29:18.881992  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.882022  105913 reflector.go:161] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0320 07:29:18.882059  105913 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.882150  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.882164  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.882197  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.882351  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.882605  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.882678  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.882738  105913 store.go:1319] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0320 07:29:18.882756  105913 master.go:425] Enabling API group "networking.k8s.io".
I0320 07:29:18.882776  105913 reflector.go:161] Listing and watching *networking.Ingress from storage/cacher.go:/ingresses
I0320 07:29:18.882788  105913 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.882896  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.882909  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.882957  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.883010  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.883746  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.883855  105913 store.go:1319] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0320 07:29:18.883867  105913 master.go:425] Enabling API group "node.k8s.io".
I0320 07:29:18.883919  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.883974  105913 reflector.go:161] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0320 07:29:18.884226  105913 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.884314  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.884330  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.884364  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.884899  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.885249  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.885311  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.885359  105913 store.go:1319] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0320 07:29:18.885447  105913 reflector.go:161] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0320 07:29:18.885513  105913 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.885583  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.885593  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.885625  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.885694  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.885957  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.886147  105913 store.go:1319] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0320 07:29:18.886169  105913 master.go:425] Enabling API group "policy".
I0320 07:29:18.886218  105913 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.886257  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.886288  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.886306  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.886334  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.886334  105913 reflector.go:161] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0320 07:29:18.886461  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.886714  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.886824  105913 store.go:1319] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0320 07:29:18.886846  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.886958  105913 reflector.go:161] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0320 07:29:18.887007  105913 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.887096  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.887108  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.887136  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.887317  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.887535  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.887632  105913 store.go:1319] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0320 07:29:18.887662  105913 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.887687  105913 reflector.go:161] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0320 07:29:18.887754  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.887769  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.887800  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.887920  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.888045  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.888175  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.888237  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.888250  105913 store.go:1319] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0320 07:29:18.888272  105913 reflector.go:161] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0320 07:29:18.888414  105913 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.888472  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.888487  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.888540  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.888593  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.888793  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.889029  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.889154  105913 store.go:1319] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0320 07:29:18.889207  105913 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.889279  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.889297  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.889365  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.889484  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.889525  105913 reflector.go:161] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0320 07:29:18.889999  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.890057  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.890117  105913 store.go:1319] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0320 07:29:18.890145  105913 reflector.go:161] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0320 07:29:18.890309  105913 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.890384  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.890409  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.890433  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.890507  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.890795  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.890861  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.890891  105913 store.go:1319] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0320 07:29:18.890918  105913 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.890951  105913 reflector.go:161] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0320 07:29:18.890987  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.890998  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.891028  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.891121  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.891378  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.891412  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.891487  105913 store.go:1319] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0320 07:29:18.891674  105913 reflector.go:161] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0320 07:29:18.891679  105913 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.891745  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.891754  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.891781  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.891818  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.892112  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.892213  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.892383  105913 store.go:1319] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0320 07:29:18.892460  105913 reflector.go:161] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0320 07:29:18.892462  105913 master.go:425] Enabling API group "rbac.authorization.k8s.io".
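The storage_factory.go lines above repeat the same storagebackend.Config for every RBAC resource: etcd at 127.0.0.1:2379, a per-test prefix, paging on, a 5m compaction interval and a 60s count-metric poll period. As a reading aid, here is a minimal sketch of building such a config in Go; the field names are taken verbatim from the log output, while the import path and field set are assumed to match the apiserver version vendored in this run, not the test's actual wiring.

// Sketch only: a storagebackend.Config matching the values printed by
// storage_factory.go above. Field names come from the log; the import path
// is assumed from the k8s.io/apiserver tree of this era.
package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	cfg := storagebackend.Config{
		Prefix: "90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", // per-test etcd prefix seen in the log
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Paging:                true,
		CompactionInterval:    5 * time.Minute, // 300000000000 ns in the log
		CountMetricPollPeriod: time.Minute,     // 60000000000 ns in the log
	}
	fmt.Printf("%+v\n", cfg)
}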
I0320 07:29:18.906360  105913 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.906740  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.906752  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.906882  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.907277  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.913210  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.913746  105913 store.go:1319] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0320 07:29:18.914281  105913 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.913765  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.914887  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.914899  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.915042  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.915499  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.919497  105913 reflector.go:161] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0320 07:29:18.920249  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.920587  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.920769  105913 store.go:1319] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0320 07:29:18.920788  105913 master.go:425] Enabling API group "scheduling.k8s.io".
I0320 07:29:18.921040  105913 master.go:417] Skipping disabled API group "settings.k8s.io".
I0320 07:29:18.921248  105913 reflector.go:161] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0320 07:29:18.923038  105913 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.923331  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.923353  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.923512  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.923996  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.927566  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.927726  105913 store.go:1319] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0320 07:29:18.927918  105913 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.928010  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.928026  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.928108  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.928195  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.928225  105913 reflector.go:161] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0320 07:29:18.928570  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.930796  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.930922  105913 store.go:1319] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0320 07:29:18.930970  105913 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.931051  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.931072  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.931119  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.931192  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.931241  105913 reflector.go:161] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0320 07:29:18.931419  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.931715  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.931856  105913 store.go:1319] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0320 07:29:18.931894  105913 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.932005  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.932021  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.932124  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.932418  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.932455  105913 reflector.go:161] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0320 07:29:18.932892  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.934414  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.934563  105913 store.go:1319] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0320 07:29:18.934622  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.934783  105913 reflector.go:161] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0320 07:29:18.934859  105913 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.934959  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.934975  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.935044  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.935180  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.937031  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.937159  105913 store.go:1319] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0320 07:29:18.937286  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.937381  105913 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.937490  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.937509  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.937545  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.937616  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.937804  105913 reflector.go:161] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0320 07:29:18.938411  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.940363  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.940552  105913 store.go:1319] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0320 07:29:18.940615  105913 master.go:425] Enabling API group "storage.k8s.io".
I0320 07:29:18.940719  105913 reflector.go:161] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0320 07:29:18.940906  105913 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.941055  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.941099  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.941215  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.941292  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.941601  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.941844  105913 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0320 07:29:18.942131  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.942191  105913 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.942271  105913 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0320 07:29:18.942348  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.942369  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.942424  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.942514  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.943017  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.943460  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.943590  105913 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0320 07:29:18.943733  105913 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0320 07:29:18.943977  105913 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.944139  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.944162  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.944231  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.944579  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.945777  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.946113  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.946261  105913 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0320 07:29:18.946346  105913 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0320 07:29:18.946753  105913 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.946839  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.946849  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.946883  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.947325  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.948418  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.948563  105913 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0320 07:29:18.948596  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.948916  105913 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0320 07:29:18.948976  105913 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.949097  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.949108  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.949143  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.949257  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.950699  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.950839  105913 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0320 07:29:18.951136  105913 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.951283  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.951295  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.951333  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.951377  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.951421  105913 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0320 07:29:18.963209  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.964152  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.964203  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.964320  105913 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0320 07:29:18.964371  105913 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0320 07:29:18.964515  105913 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.964628  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.964643  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.964698  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.964791  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.965001  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.965297  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.965348  105913 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0320 07:29:18.965424  105913 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0320 07:29:18.965618  105913 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.965718  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.965740  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.965817  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.965881  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.968160  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.968246  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.968270  105913 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0320 07:29:18.968287  105913 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0320 07:29:18.968510  105913 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.968612  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.968682  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.968735  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.968820  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.969309  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.969412  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.969505  105913 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0320 07:29:18.969542  105913 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0320 07:29:18.970429  105913 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.970534  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.970551  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.970599  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.970661  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.970912  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.970991  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.971028  105913 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0320 07:29:18.971054  105913 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0320 07:29:18.971639  105913 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.971718  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.971735  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.971765  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.971819  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.972041  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.972128  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.972275  105913 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0320 07:29:18.972418  105913 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.972529  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.972543  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.972570  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.972615  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.972616  105913 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0320 07:29:18.972818  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.972929  105913 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0320 07:29:18.972954  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.972998  105913 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0320 07:29:18.973060  105913 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.973137  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.973164  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.973207  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.973342  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.973645  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.973756  105913 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0320 07:29:18.973777  105913 master.go:425] Enabling API group "apps".
I0320 07:29:18.973804  105913 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.973869  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.973885  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.973891  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.973966  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.974032  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.974069  105913 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0320 07:29:18.974265  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.974444  105913 store.go:1319] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0320 07:29:18.974511  105913 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.974533  105913 reflector.go:161] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0320 07:29:18.974596  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.974619  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.974658  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.974467  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.974782  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.975092  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.975155  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.975209  105913 store.go:1319] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0320 07:29:18.975226  105913 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0320 07:29:18.975257  105913 reflector.go:161] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0320 07:29:18.975249  105913 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"90d36f6b-b65e-4ad8-9c46-dbe9e3fcccd3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 07:29:18.975490  105913 client.go:352] parsed scheme: ""
I0320 07:29:18.975507  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:18.975535  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:18.975589  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:18.975910  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:18.975937  105913 store.go:1319] Monitoring events count at <storage-prefix>//events
I0320 07:29:18.975948  105913 master.go:425] Enabling API group "events.k8s.io".
I0320 07:29:18.977069  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
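Each "Listing and watching *rbac.Role ..." / "*apps.Deployment ..." line above is a cacher-internal reflector doing an initial LIST followed by a WATCH. The same list-then-watch pattern is what client-go exposes to consumers; the sketch below shows it with client-go's Reflector. The kubeconfig path and the choice of pods as the watched resource are illustrative assumptions, not taken from this test.

// Sketch of the list-then-watch pattern behind the reflector.go lines above,
// using client-go's Reflector against a live apiserver. The kubeconfig path
// and resource choice are placeholders for illustration.
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// LIST all pods once, then keep WATCHing for changes, mirroring the
	// "Listing and watching ..." lines emitted by the cacher's reflectors.
	lw := cache.NewListWatchFromClient(clientset.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	reflector := cache.NewReflector(lw, &v1.Pod{}, store, 0)

	stop := make(chan struct{})
	go reflector.Run(stop)

	time.Sleep(5 * time.Second)
	fmt.Printf("cached %d pods\n", len(store.List()))
	close(stop)
}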
W0320 07:29:18.981992  105913 genericapiserver.go:344] Skipping API batch/v2alpha1 because it has no resources.
W0320 07:29:18.990348  105913 genericapiserver.go:344] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0320 07:29:18.996008  105913 genericapiserver.go:344] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0320 07:29:18.997017  105913 genericapiserver.go:344] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0320 07:29:18.999773  105913 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0320 07:29:19.014544  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.014569  105913 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0320 07:29:19.014579  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.014587  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.014600  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.014753  105913 wrap.go:47] GET /healthz: (322.309µs) 500
goroutine 29393 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012775110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012775110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0092e62a0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc012178278, 0xc00005e1a0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc012178278, 0xc01127aa00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc012178278, 0xc01127aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc012178278, 0xc01127aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc012178278, 0xc01127aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc012178278, 0xc01127aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc012178278, 0xc01127aa00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc012178278, 0xc01127aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc012178278, 0xc01127aa00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc012178278, 0xc01127aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc012178278, 0xc01127aa00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc012178278, 0xc01127aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc012178278, 0xc01127a800)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc012178278, 0xc01127a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006402e40, 0xc00f381720, 0x75f71a0, 0xc012178278, 0xc01127a800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40978]
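The GET /healthz 500s above are expected at this point: etcd's client connection and the post-start hooks (bootstrap-controller, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) have not finished, so the aggregated check fails. A test harness typically polls /healthz until it returns 200 before proceeding. Below is a minimal, standard-library-only sketch of such a wait loop; the base URL and timing values are placeholders, not values from this run.

// Minimal sketch: poll an apiserver's /healthz until it returns 200 OK or a
// timeout expires. Standard library only; URL and timings are illustrative.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for {
		resp, err := client.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // all checks ([+]ping, [+]log, etcd, post-start hooks) passed
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("apiserver not healthy after %v", timeout)
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}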
I0320 07:29:19.015603  105913 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.022485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40980]
I0320 07:29:19.017647  105913 wrap.go:47] GET /api/v1/services: (871.821µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40980]
I0320 07:29:19.020740  105913 wrap.go:47] GET /api/v1/services: (831.744µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40980]
I0320 07:29:19.022513  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.022541  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.022565  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.022578  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.022725  105913 wrap.go:47] GET /healthz: (280.08µs) 500
goroutine 29466 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011247180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011247180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a318820, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e410160, 0xc009604180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88200)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88200)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88200)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88200)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88100)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e410160, 0xc005b88100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e1e08a0, 0xc00f381720, 0x75f71a0, 0xc00e410160, 0xc005b88100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:19.023955  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.618159ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40980]
I0320 07:29:19.023984  105913 wrap.go:47] GET /api/v1/services: (838.294µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40984]
I0320 07:29:19.025008  105913 wrap.go:47] GET /api/v1/services: (1.881466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:19.025690  105913 wrap.go:47] POST /api/v1/namespaces: (1.21062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40984]
I0320 07:29:19.026924  105913 wrap.go:47] GET /api/v1/namespaces/kube-public: (918.121µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:19.028436  105913 wrap.go:47] POST /api/v1/namespaces: (1.21369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:19.029453  105913 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (731.875µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:19.030913  105913 wrap.go:47] POST /api/v1/namespaces: (1.205127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
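The GET-404-then-POST-201 pairs above are the bootstrap controller ensuring the system namespaces (kube-system, kube-public, kube-node-lease) exist. The sketch below reproduces that get-or-create pattern against the same REST paths with only the standard library; the apiserver address is a placeholder and no auth is attached, since this integration test serves an insecure local endpoint.

// Sketch of the get-or-create pattern visible above: GET a namespace, and on
// 404 POST it to /api/v1/namespaces. Standard library only; URL is a placeholder.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func ensureNamespace(apiserver, name string) error {
	getURL := fmt.Sprintf("%s/api/v1/namespaces/%s", apiserver, name)
	resp, err := http.Get(getURL)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return nil // already exists
	}
	if resp.StatusCode != http.StatusNotFound {
		return fmt.Errorf("unexpected status %d for GET %s", resp.StatusCode, getURL)
	}
	// 404: create it, mirroring the POST /api/v1/namespaces -> 201 lines in the log.
	body := []byte(fmt.Sprintf(`{"apiVersion":"v1","kind":"Namespace","metadata":{"name":%q}}`, name))
	resp, err = http.Post(apiserver+"/api/v1/namespaces", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("create %s: status %d", name, resp.StatusCode)
	}
	return nil
}

func main() {
	for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace("http://127.0.0.1:8080", ns); err != nil {
			fmt.Println(err)
		}
	}
}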
I0320 07:29:19.115489  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.115550  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.115565  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.115574  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.115710  105913 wrap.go:47] GET /healthz: (338.202µs) 500
goroutine 29558 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002890f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002890f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a2ab6c0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00ba2a368, 0xc010374900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17c00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17c00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17c00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17c00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17b00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00ba2a368, 0xc005b17b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e1cc660, 0xc00f381720, 0x75f71a0, 0xc00ba2a368, 0xc005b17b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40978]
I0320 07:29:19.123929  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.123964  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.123975  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.123983  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.124144  105913 wrap.go:47] GET /healthz: (328.716µs) 500
goroutine 29542 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010dd8620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010dd8620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0082b1060, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc0119e2520, 0xc00936a180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc0119e2520, 0xc005802000)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc0119e2520, 0xc005802000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc0119e2520, 0xc005802000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc0119e2520, 0xc005802000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc0119e2520, 0xc005802000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc0119e2520, 0xc005802000)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc0119e2520, 0xc005802000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc0119e2520, 0xc005802000)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc0119e2520, 0xc005802000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc0119e2520, 0xc005802000)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc0119e2520, 0xc005802000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc0119e2520, 0xc00a5dff00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc0119e2520, 0xc00a5dff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e2722a0, 0xc00f381720, 0x75f71a0, 0xc0119e2520, 0xc00a5dff00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:19.215495  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.215530  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.215541  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.215549  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.215712  105913 wrap.go:47] GET /healthz: (369.272µs) 500
goroutine 29544 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010dd8700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010dd8700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0082b12e0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc0119e2578, 0xc00936a780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802600)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802600)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802600)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802600)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802500)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc0119e2578, 0xc005802500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e272720, 0xc00f381720, 0x75f71a0, 0xc0119e2578, 0xc005802500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40978]
I0320 07:29:19.223554  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.223594  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.223614  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.223622  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.223767  105913 wrap.go:47] GET /healthz: (347.683µs) 500
goroutine 29560 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002890fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002890fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a2ab840, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc010374d80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c600)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c600)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c600)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c600)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c500)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00ba2a3b0, 0xc00586c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e1cd140, 0xc00f381720, 0x75f71a0, 0xc00ba2a3b0, 0xc00586c500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:19.315595  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.315660  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.315671  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.315689  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.315896  105913 wrap.go:47] GET /healthz: (463.732µs) 500
goroutine 29571 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010cc4850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010cc4850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008052e60, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009ce8098, 0xc0069f8480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736700)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736700)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736700)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736700)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736600)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009ce8098, 0xc00b736600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b51a8a0, 0xc00f381720, 0x75f71a0, 0xc009ce8098, 0xc00b736600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40978]
I0320 07:29:19.326271  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.326314  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.326324  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.326332  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.326495  105913 wrap.go:47] GET /healthz: (340.611µs) 500
goroutine 29471 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011247810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011247810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a23c140, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e4102b0, 0xc009605080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89c00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89c00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89c00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89c00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89b00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e4102b0, 0xc005b89b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e1e15c0, 0xc00f381720, 0x75f71a0, 0xc00e4102b0, 0xc005b89b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40982]
I0320 07:29:19.415547  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.415587  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.415598  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.415608  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.415756  105913 wrap.go:47] GET /healthz: (361.457µs) 500
goroutine 29473 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002740070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002740070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a23c1e0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e4102b8, 0xc009605500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8100)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8100)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8100)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8100)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8000)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e4102b8, 0xc004ea8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e1e1680, 0xc00f381720, 0x75f71a0, 0xc00e4102b8, 0xc004ea8000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40982]
I0320 07:29:19.423465  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.423499  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.423510  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.423518  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.423700  105913 wrap.go:47] GET /healthz: (363.324µs) 500
goroutine 29546 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010dd87e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010dd87e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0082b1700, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc0119e25f8, 0xc00936af00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802d00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802d00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802d00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802d00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802c00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc0119e25f8, 0xc005802c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e272c60, 0xc00f381720, 0x75f71a0, 0xc0119e25f8, 0xc005802c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40982]
I0320 07:29:19.515624  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.515658  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.515668  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.515687  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.515844  105913 wrap.go:47] GET /healthz: (373.304µs) 500
goroutine 29587 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002740230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002740230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a23c280, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e4102c0, 0xc009605980, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8500)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8500)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8500)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8500)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8400)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e4102c0, 0xc004ea8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e1e19e0, 0xc00f381720, 0x75f71a0, 0xc00e4102c0, 0xc004ea8400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40982]
I0320 07:29:19.523550  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.523578  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.523587  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.523595  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.523792  105913 wrap.go:47] GET /healthz: (377.145µs) 500
goroutine 29589 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002740310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002740310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a23c340, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e4102c8, 0xc009605e00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8900)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8900)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8900)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8900)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8800)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e4102c8, 0xc004ea8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e1e1b00, 0xc00f381720, 0x75f71a0, 0xc00e4102c8, 0xc004ea8800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40982]
I0320 07:29:19.615507  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.615541  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.615551  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.615558  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.615716  105913 wrap.go:47] GET /healthz: (331.115µs) 500
goroutine 29591 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0027403f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0027403f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a23c440, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e4102f0, 0xc0082e8300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea9200)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea9200)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea9200)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea9200)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea8f00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e4102f0, 0xc004ea8f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e1e1f20, 0xc00f381720, 0x75f71a0, 0xc00e4102f0, 0xc004ea8f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40982]
I0320 07:29:19.623612  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.623645  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.623656  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.623672  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.623817  105913 wrap.go:47] GET /healthz: (334.17µs) 500
goroutine 29573 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010cc4930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010cc4930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0080530e0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009ce80c0, 0xc0069f8a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736d00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736d00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736d00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736d00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736c00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009ce80c0, 0xc00b736c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b51aa80, 0xc00f381720, 0x75f71a0, 0xc009ce80c0, 0xc00b736c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40982]
I0320 07:29:19.715472  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.715518  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.715528  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.715537  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.715686  105913 wrap.go:47] GET /healthz: (350.151µs) 500
goroutine 29575 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010cc4a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010cc4a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc008053380, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009ce80f0, 0xc0069f9080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737300)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737300)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737300)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737300)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737200)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009ce80f0, 0xc00b737200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b51acc0, 0xc00f381720, 0x75f71a0, 0xc009ce80f0, 0xc00b737200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40982]
I0320 07:29:19.723443  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.723472  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.723482  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.723489  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.723666  105913 wrap.go:47] GET /healthz: (341.496µs) 500
goroutine 29562 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0028910a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0028910a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a2abce0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00ba2a400, 0xc010375500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586cf00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586cf00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586cf00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586cf00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586ce00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00ba2a400, 0xc00586ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e1cdda0, 0xc00f381720, 0x75f71a0, 0xc00ba2a400, 0xc00586ce00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40982]
I0320 07:29:19.815921  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.815958  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.815970  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.815979  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.816189  105913 wrap.go:47] GET /healthz: (401.778µs) 500
goroutine 29593 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002740540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002740540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a23c880, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e410318, 0xc0082e8a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9d00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9d00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9d00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9d00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9c00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e410318, 0xc004ea9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e0e89c0, 0xc00f381720, 0x75f71a0, 0xc00e410318, 0xc004ea9c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40982]
I0320 07:29:19.823467  105913 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 07:29:19.823493  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.823502  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.823509  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.823605  105913 wrap.go:47] GET /healthz: (264.501µs) 500
goroutine 29564 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002891180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002891180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a226040, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00ba2a448, 0xc010375b00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca200)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca200)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca200)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca200)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca100)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00ba2a448, 0xc0041ca100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e0c8420, 0xc00f381720, 0x75f71a0, 0xc00ba2a448, 0xc0041ca100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40982]
I0320 07:29:19.830685  105913 client.go:352] parsed scheme: ""
I0320 07:29:19.830708  105913 client.go:352] scheme "" not registered, fallback to default scheme
I0320 07:29:19.830749  105913 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 07:29:19.830801  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:19.831186  105913 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 07:29:19.831223  105913 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 07:29:19.916533  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.916569  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.916577  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.916741  105913 wrap.go:47] GET /healthz: (1.315229ms) 500
goroutine 29500 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00274c380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00274c380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a33d4c0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009f840e8, 0xc004eac580, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509e00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509e00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509e00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509e00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509900)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009f840e8, 0xc006509900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e305e00, 0xc00f381720, 0x75f71a0, 0xc009f840e8, 0xc006509900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40982]
I0320 07:29:19.924181  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:19.924211  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:19.924220  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:19.924372  105913 wrap.go:47] GET /healthz: (985.1µs) 500
goroutine 29609 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0028913b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0028913b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a226800, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc001fc0160, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca700)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca700)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca700)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca700)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca600)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00ba2a4a8, 0xc0041ca600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e0c9c80, 0xc00f381720, 0x75f71a0, 0xc00ba2a4a8, 0xc0041ca600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40982]
I0320 07:29:20.016377  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.511947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40994]
I0320 07:29:20.016489  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.849947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.016725  105913 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.083647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40982]
I0320 07:29:20.017624  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.017644  105913 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 07:29:20.017652  105913 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 07:29:20.017777  105913 wrap.go:47] GET /healthz: (1.139808ms) 500
goroutine 29614 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0028919d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0028919d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a227560, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00ba2a528, 0xc001fc09a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cba00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cba00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cba00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cba00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cb800)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00ba2a528, 0xc0041cb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00dffeea0, 0xc00f381720, 0x75f71a0, 0xc00ba2a528, 0xc0041cb800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40996]
I0320 07:29:20.018706  105913 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.55258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.018712  105913 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.476045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40982]
I0320 07:29:20.018866  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.094788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40998]
I0320 07:29:20.018894  105913 storage_scheduling.go:113] created PriorityClass system-node-critical with value 2000001000
I0320 07:29:20.019796  105913 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (681.878µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.020197  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.057313ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.020286  105913 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.207877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40998]
I0320 07:29:20.021450  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (972.684µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40998]
I0320 07:29:20.021626  105913 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.382784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.021829  105913 storage_scheduling.go:113] created PriorityClass system-cluster-critical with value 2000000000
I0320 07:29:20.021848  105913 storage_scheduling.go:122] all system priority classes are created successfully or already exist.
I0320 07:29:20.022453  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (695.146µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40998]
I0320 07:29:20.023447  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (647.788µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.024310  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.024404  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (736.292µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.024457  105913 wrap.go:47] GET /healthz: (1.044835ms) 500
goroutine 29652 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010b98fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010b98fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a208700, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00d4da160, 0xc0038a03c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00d4da160, 0xc010cabe00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00d4da160, 0xc010cabe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00d4da160, 0xc010cabe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00d4da160, 0xc010cabe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00d4da160, 0xc010cabe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00d4da160, 0xc010cabe00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00d4da160, 0xc010cabe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00d4da160, 0xc010cabe00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00d4da160, 0xc010cabe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00d4da160, 0xc010cabe00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00d4da160, 0xc010cabe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00d4da160, 0xc010caba00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00d4da160, 0xc010caba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00dd28780, 0xc00f381720, 0x75f71a0, 0xc00d4da160, 0xc010caba00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.025565  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (643.159µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.026514  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (618.284µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.029879  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (734.948µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.031475  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.192508ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.031647  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0320 07:29:20.032798  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (924.03µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.034471  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.193865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.034618  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0320 07:29:20.035514  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (728.569µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.036983  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.059674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.037125  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0320 07:29:20.037978  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (648.657µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.039593  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.263158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.039781  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0320 07:29:20.040746  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (748.29µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.042619  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.418154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.042891  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0320 07:29:20.043854  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (774.886µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.045485  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.1928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.045673  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0320 07:29:20.046519  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (691.54µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.047887  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.066309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.048051  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0320 07:29:20.048954  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (728.975µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.050573  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.202643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.050895  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0320 07:29:20.051794  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (746.635µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.053682  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.471503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.053928  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0320 07:29:20.054985  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (836.393µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.056803  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.429537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.057127  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0320 07:29:20.058006  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (694.982µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.059594  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.240292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.059866  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0320 07:29:20.061001  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (820.546µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.063265  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.821278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.064003  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0320 07:29:20.064885  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (662.728µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.066684  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.333232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.066854  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0320 07:29:20.067794  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (697.657µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.069335  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.132235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.069507  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0320 07:29:20.070317  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (658.336µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.071768  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.164928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.071959  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0320 07:29:20.072849  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (715.615µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.074369  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.221867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.074529  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0320 07:29:20.075489  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (763.429µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.077190  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.246712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.077379  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0320 07:29:20.078242  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (719.204µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.079839  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.242315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.080029  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0320 07:29:20.085816  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (5.587959ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.088241  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.376996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.088436  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0320 07:29:20.089286  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (704.969µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.091143  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.448253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.091380  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0320 07:29:20.092307  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (725.384µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.093717  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.116252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.093917  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0320 07:29:20.094856  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (781.12µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.096628  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.352347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.096798  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0320 07:29:20.097681  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (719.907µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.099468  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.403016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.099651  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0320 07:29:20.100504  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (672.256µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.102128  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.252335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.102328  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0320 07:29:20.103207  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (718.8µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.104734  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.187678ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.104989  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0320 07:29:20.105978  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (773.792µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.107595  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.209313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.107752  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0320 07:29:20.108730  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (796.845µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.110244  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.128227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.110454  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0320 07:29:20.111369  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (751.45µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.113020  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.306121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.113419  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0320 07:29:20.114252  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (654.998µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.116234  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.116322  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.731139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.116490  105913 wrap.go:47] GET /healthz: (1.29992ms) 500
goroutine 29713 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0027098f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0027098f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00932d3a0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc012178b90, 0xc000078c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab200)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab200)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab200)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab200)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab100)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc012178b90, 0xc003bab100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d00f500, 0xc00f381720, 0x75f71a0, 0xc012178b90, 0xc003bab100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40978]
I0320 07:29:20.116509  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0320 07:29:20.117526  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (750.985µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.118916  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.053155ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.119054  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0320 07:29:20.119883  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (625.757µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.121509  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.292692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.121683  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0320 07:29:20.122619  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (741.418µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.124109  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.124289  105913 wrap.go:47] GET /healthz: (990.017µs) 500
goroutine 29816 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002548380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002548380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0041d2a60, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00b9d6140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a900)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a900)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a900)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a900)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a800)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc0117b5af8, 0xc00593a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cea6300, 0xc00f381720, 0x75f71a0, 0xc0117b5af8, 0xc00593a800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.124300  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.307666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.124504  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0320 07:29:20.125497  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (836.326µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.127140  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.206406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.127370  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0320 07:29:20.128269  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (716.057µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.129915  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.309015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.130222  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0320 07:29:20.131252  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (662.159µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.138604  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.240553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.138760  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0320 07:29:20.139636  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (742.402µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.141381  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.346209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.141642  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0320 07:29:20.142728  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (909.634µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.144504  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.370943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.144718  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0320 07:29:20.145698  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (804.282µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.147457  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.377338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.147687  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0320 07:29:20.148635  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (782.036µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.150409  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.280319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.150661  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0320 07:29:20.151983  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.149082ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.154122  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.732462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.154348  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0320 07:29:20.155430  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (914.801µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.157103  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.245992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.157338  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0320 07:29:20.158570  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.010569ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.160477  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.461985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.160677  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0320 07:29:20.161592  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (761.916µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.163246  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.303517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.163441  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0320 07:29:20.164298  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (680.788µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.166027  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.325463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.166329  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0320 07:29:20.167334  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (798.531µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.168768  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.090222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.168946  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0320 07:29:20.169875  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (754.492µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.171350  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.146717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.171527  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0320 07:29:20.172478  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (805.765µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.173966  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.180215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.174155  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0320 07:29:20.175038  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (739.066µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.176923  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.447772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.177204  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0320 07:29:20.178099  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (713.97µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.179845  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.382796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.180023  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0320 07:29:20.180903  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (711.981µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.196944  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.08648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.197218  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0320 07:29:20.216155  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.216316  105913 wrap.go:47] GET /healthz: (977.495µs) 500
goroutine 29901 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0024d2850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0024d2850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0028c0c00, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00a600418, 0xc0038a0780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00a600418, 0xc007469100)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00a600418, 0xc007469100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00a600418, 0xc007469100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00a600418, 0xc007469100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00a600418, 0xc007469100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00a600418, 0xc007469100)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00a600418, 0xc007469100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00a600418, 0xc007469100)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00a600418, 0xc007469100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00a600418, 0xc007469100)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00a600418, 0xc007469100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00a600418, 0xc007469000)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00a600418, 0xc007469000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b308420, 0xc00f381720, 0x75f71a0, 0xc00a600418, 0xc007469000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40996]
I0320 07:29:20.216331  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.445067ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.224399  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.224567  105913 wrap.go:47] GET /healthz: (1.223288ms) 500
goroutine 29903 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0024d2930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0024d2930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0028c14c0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00a600450, 0xc00b9d6500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00a600450, 0xc007469500)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00a600450, 0xc007469500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00a600450, 0xc007469500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00a600450, 0xc007469500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00a600450, 0xc007469500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00a600450, 0xc007469500)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00a600450, 0xc007469500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00a600450, 0xc007469500)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00a600450, 0xc007469500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00a600450, 0xc007469500)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00a600450, 0xc007469500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00a600450, 0xc007469400)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00a600450, 0xc007469400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b308960, 0xc00f381720, 0x75f71a0, 0xc00a600450, 0xc007469400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.236798  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.000246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.237049  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0320 07:29:20.256472  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.592602ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.276977  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.10142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.277277  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0320 07:29:20.296234  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.429289ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.316858  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.999022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.317125  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0320 07:29:20.317180  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.317354  105913 wrap.go:47] GET /healthz: (1.440988ms) 500
goroutine 29886 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00230c2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00230c2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002642a00, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009f84c08, 0xc000079180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009f84c08, 0xc003aaa000)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009f84c08, 0xc003aaa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009f84c08, 0xc003aaa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009f84c08, 0xc003aaa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009f84c08, 0xc003aaa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009f84c08, 0xc003aaa000)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009f84c08, 0xc003aaa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009f84c08, 0xc003aaa000)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009f84c08, 0xc003aaa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009f84c08, 0xc003aaa000)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009f84c08, 0xc003aaa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009f84c08, 0xc002419e00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009f84c08, 0xc002419e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008126f60, 0xc00f381720, 0x75f71a0, 0xc009f84c08, 0xc002419e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40996]
I0320 07:29:20.324267  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.324462  105913 wrap.go:47] GET /healthz: (1.112858ms) 500
goroutine 29853 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002506c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002506c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0028dd9c0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e410dc0, 0xc00c0668c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426900)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426900)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426900)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426900)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426800)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e410dc0, 0xc002426800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b527380, 0xc00f381720, 0x75f71a0, 0xc00e410dc0, 0xc002426800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.335702  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (943.173µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.356524  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.78039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.356761  105913 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0320 07:29:20.375898  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.06374ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.398301  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.424031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.398546  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0320 07:29:20.417311  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.524336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.418262  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.418462  105913 wrap.go:47] GET /healthz: (1.665539ms) 500
goroutine 29926 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0024d3650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0024d3650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00113f1e0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00a600748, 0xc0038a0dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ef00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ef00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ef00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ef00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ee00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00a600748, 0xc003b3ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009d63d40, 0xc00f381720, 0x75f71a0, 0xc00a600748, 0xc003b3ee00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40978]
I0320 07:29:20.424138  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.424293  105913 wrap.go:47] GET /healthz: (975.115µs) 500
goroutine 29954 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00230ccb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00230ccb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002409f60, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009f84d08, 0xc00b9d6b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aabb00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aabb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aabb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aabb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aabb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aabb00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aabb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aabb00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aabb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aabb00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aabb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aab900)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009f84d08, 0xc003aab900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008127740, 0xc00f381720, 0x75f71a0, 0xc009f84d08, 0xc003aab900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.436696  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.98109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.436900  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0320 07:29:20.455873  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.105013ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.476723  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.965252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.476985  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0320 07:29:20.496163  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.161357ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.516432  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.516614  105913 wrap.go:47] GET /healthz: (1.336018ms) 500
goroutine 29933 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002304770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002304770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000428100, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00a6009b0, 0xc00b9d6f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054500)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054500)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054500)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054500)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054400)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00a6009b0, 0xc004054400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a5ee360, 0xc00f381720, 0x75f71a0, 0xc00a6009b0, 0xc004054400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40996]
I0320 07:29:20.516743  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.971127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.516971  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0320 07:29:20.524314  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.524528  105913 wrap.go:47] GET /healthz: (1.112547ms) 500
goroutine 29959 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00230d110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00230d110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000b56340, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009f84de0, 0xc00b9d72c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2aa00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2aa00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2aa00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2aa00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2a900)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009f84de0, 0xc003e2a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008127d40, 0xc00f381720, 0x75f71a0, 0xc009f84de0, 0xc003e2a900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.535739  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (990.882µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.556660  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.787152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.556857  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0320 07:29:20.575957  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.178863ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.596528  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.694471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.596725  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0320 07:29:20.629304  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.629492  105913 wrap.go:47] GET /healthz: (14.217127ms) 500
goroutine 29987 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0025a69a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0025a69a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0092fad40, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00d4daca0, 0xc003a54140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674a00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674a00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674a00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674a00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674900)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00d4daca0, 0xc005674900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a47a480, 0xc00f381720, 0x75f71a0, 0xc00d4daca0, 0xc005674900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40978]
I0320 07:29:20.630568  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (12.380051ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.630575  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.630719  105913 wrap.go:47] GET /healthz: (965.782µs) 500
goroutine 29990 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0025a6a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0025a6a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0092fb120, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc00c174780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675100)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675100)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675100)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675100)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675000)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00d4dacd0, 0xc005675000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a47b2c0, 0xc00f381720, 0x75f71a0, 0xc00d4dacd0, 0xc005675000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40978]
I0320 07:29:20.638232  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.920578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.638439  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0320 07:29:20.658580  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (3.436476ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.678954  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.144803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.679213  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0320 07:29:20.696767  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (2.044527ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.716516  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.716705  105913 wrap.go:47] GET /healthz: (1.398147ms) 500
goroutine 29935 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002304850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002304850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001d50840, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00a6009f8, 0xc00c174c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004055200)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004055200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004055200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004055200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004055200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004055200)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004055200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004055200)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004055200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004055200)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004055200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004054e00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00a6009f8, 0xc004054e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a5ef980, 0xc00f381720, 0x75f71a0, 0xc00a6009f8, 0xc004054e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41000]
I0320 07:29:20.716825  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.98825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.717026  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0320 07:29:20.724151  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.724340  105913 wrap.go:47] GET /healthz: (1.025028ms) 500
goroutine 29583 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010cc5730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010cc5730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00a18b540, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009ce82d0, 0xc00317e3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bc00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bc00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bc00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bc00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bb00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009ce82d0, 0xc006f1bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b51bc20, 0xc00f381720, 0x75f71a0, 0xc009ce82d0, 0xc006f1bb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:20.735787  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.059329ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:20.756522  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.691138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:20.756794  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0320 07:29:20.775881  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.098278ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:20.796779  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.996235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:20.796984  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0320 07:29:20.815944  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.816130  105913 wrap.go:47] GET /healthz: (841.5µs) 500
goroutine 30018 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0025a7d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0025a7d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0020ea520, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00d4daef0, 0xc00fda6780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274b00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274b00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274b00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274b00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274a00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00d4daef0, 0xc008274a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002013e60, 0xc00f381720, 0x75f71a0, 0xc00d4daef0, 0xc008274a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40996]
I0320 07:29:20.816170  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.392137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:20.824117  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.824325  105913 wrap.go:47] GET /healthz: (975.835µs) 500
goroutine 29841 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0024e6e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0024e6e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000d69a40, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc0121791d8, 0xc00b9d7900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d300)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d300)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d300)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d300)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d200)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc0121791d8, 0xc00816d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b1548a0, 0xc00f381720, 0x75f71a0, 0xc0121791d8, 0xc00816d200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.836760  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.010466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.836949  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0320 07:29:20.855702  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (995.784µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.876509  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.746215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.876754  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0320 07:29:20.896297  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.463103ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.916338  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.916539  105913 wrap.go:47] GET /healthz: (1.28726ms) 500
goroutine 30038 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0024e7490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0024e7490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0030baae0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc0121792a8, 0xc00317e8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816dc00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816dc00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816dc00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816dc00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816db00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc0121792a8, 0xc00816db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b154de0, 0xc00f381720, 0x75f71a0, 0xc0121792a8, 0xc00816db00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41000]
I0320 07:29:20.916826  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.042596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.917139  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0320 07:29:20.924123  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:20.924307  105913 wrap.go:47] GET /healthz: (985.411µs) 500
goroutine 30025 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022dca80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022dca80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00311f100, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00d4db098, 0xc00c067040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6800)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6800)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6800)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6800)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6700)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00d4db098, 0xc0096e6700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006a797a0, 0xc00f381720, 0x75f71a0, 0xc00d4db098, 0xc0096e6700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.935668  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (892.867µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.956415  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.608347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.956610  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0320 07:29:20.975903  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.093256ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.996724  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.85552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:20.996923  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0320 07:29:21.016160  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.361759ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.017044  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.017270  105913 wrap.go:47] GET /healthz: (1.31126ms) 500
goroutine 30032 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022dd2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022dd2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003214740, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00d4db238, 0xc00fda6dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7b00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7b00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7b00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7b00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7a00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00d4db238, 0xc0096e7a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0020d9a40, 0xc00f381720, 0x75f71a0, 0xc00d4db238, 0xc0096e7a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41000]
I0320 07:29:21.024094  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.024251  105913 wrap.go:47] GET /healthz: (924.274µs) 500
goroutine 30050 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022dd3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022dd3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003214d40, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00d4db250, 0xc00317ec80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00d4db250, 0xc0068be000)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00d4db250, 0xc0068be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00d4db250, 0xc0068be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00d4db250, 0xc0068be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00d4db250, 0xc0068be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00d4db250, 0xc0068be000)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00d4db250, 0xc0068be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00d4db250, 0xc0068be000)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00d4db250, 0xc0068be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00d4db250, 0xc0068be000)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00d4db250, 0xc0068be000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00d4db250, 0xc0096e7f00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00d4db250, 0xc0096e7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0010e33e0, 0xc00f381720, 0x75f71a0, 0xc00d4db250, 0xc0096e7f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.036691  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.927519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.036872  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0320 07:29:21.057336  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (2.559459ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.077056  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.242641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.077332  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0320 07:29:21.095901  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.112773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.116289  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.116498  105913 wrap.go:47] GET /healthz: (1.248173ms) 500
goroutine 30054 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022dd7a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022dd7a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00334e100, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00d4db358, 0xc00c067540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bef00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bef00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bef00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bef00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bee00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00d4db358, 0xc0068bee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004b3e180, 0xc00f381720, 0x75f71a0, 0xc00d4db358, 0xc0068bee00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40996]
I0320 07:29:21.116777  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.973008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.116907  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0320 07:29:21.124202  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.124369  105913 wrap.go:47] GET /healthz: (1.027492ms) 500
goroutine 29937 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002304cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002304cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001d51e80, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00a600a90, 0xc00c067a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055c00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055c00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055c00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055c00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055b00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00a600a90, 0xc004055b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002285200, 0xc00f381720, 0x75f71a0, 0xc00a600a90, 0xc004055b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.135878  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.079983ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.156781  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.983302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.156945  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0320 07:29:21.176504  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.707821ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.196867  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.12226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.197147  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0320 07:29:21.215992  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.221919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.220368  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.220560  105913 wrap.go:47] GET /healthz: (4.510283ms) 500
goroutine 30109 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002305340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002305340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00348c3c0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00a600b60, 0xc00b9d7e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc3000)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc3000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc3000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc3000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc3000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc3000)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc3000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc3000)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc3000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc3000)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc3000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc2f00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00a600b60, 0xc007fc2f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002c70180, 0xc00f381720, 0x75f71a0, 0xc00a600b60, 0xc007fc2f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40996]
I0320 07:29:21.223966  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.224182  105913 wrap.go:47] GET /healthz: (946.708µs) 500
goroutine 30116 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022bd2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022bd2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0034810e0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009f853d0, 0xc00c067e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfe00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfe00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfe00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfe00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfc00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009f853d0, 0xc0075bfc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002a429c0, 0xc00f381720, 0x75f71a0, 0xc009f853d0, 0xc0075bfc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.236521  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.753093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.236728  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0320 07:29:21.256006  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.187903ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.276853  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.00944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.277090  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
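Each bootstrap binding follows the same get-or-create sequence visible in these entries: a GET of the named clusterrolebinding returns 404, then a POST to the collection returns 201 and storage_rbac.go logs "created clusterrolebinding...". A hypothetical sketch of that sequence over the same REST paths is below; ensureClusterRoleBinding, the server address, and the pre-encoded JSON body are illustrative assumptions, not the reconciler's actual code.

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensureClusterRoleBinding mirrors the GET-404 / POST-201 pattern in the log.
// Authentication headers and detailed error handling are omitted for brevity.
func ensureClusterRoleBinding(c *http.Client, server, name string, body []byte) error {
	resp, err := c.Get(server + "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/" + name)
	if err != nil {
		return err
	}
	resp.Body.Close()
	switch resp.StatusCode {
	case http.StatusOK:
		return nil // already present, nothing to do
	case http.StatusNotFound:
		// absent, create it below
	default:
		return fmt.Errorf("unexpected status %d looking up %s", resp.StatusCode, name)
	}
	resp, err = c.Post(server+"/apis/rbac.authorization.k8s.io/v1/clusterrolebindings",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("create of %s returned %d", name, resp.StatusCode)
	}
	return nil
}

func main() {
	// Example call; server address, binding name, and JSON body are placeholders.
	_ = ensureClusterRoleBinding(http.DefaultClient, "http://127.0.0.1:8080",
		"system:controller:example", []byte(`{}`))
}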
I0320 07:29:21.296286  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.381177ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.316293  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.316495  105913 wrap.go:47] GET /healthz: (1.186898ms) 500
goroutine 30162 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022d6850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022d6850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003584ee0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009ce8558, 0xc00fda7540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623d00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623d00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623d00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623d00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623c00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009ce8558, 0xc009623c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001b2c540, 0xc00f381720, 0x75f71a0, 0xc009ce8558, 0xc009623c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41000]
I0320 07:29:21.317265  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.371539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.317499  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0320 07:29:21.324253  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.324421  105913 wrap.go:47] GET /healthz: (1.066912ms) 500
goroutine 30178 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022bdf80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022bdf80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00360f2c0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009f85538, 0xc000323040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009f85538, 0xc008165300)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009f85538, 0xc008165300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009f85538, 0xc008165300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009f85538, 0xc008165300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009f85538, 0xc008165300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009f85538, 0xc008165300)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009f85538, 0xc008165300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009f85538, 0xc008165300)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009f85538, 0xc008165300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009f85538, 0xc008165300)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009f85538, 0xc008165300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009f85538, 0xc008165200)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009f85538, 0xc008165200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001efb6e0, 0xc00f381720, 0x75f71a0, 0xc009f85538, 0xc008165200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
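Every stack trace in this section passes through the same wrapping order: timeoutHandler, WithAuthentication, WithImpersonation, WithMaxInFlightLimit, WithAuthorization, then the director/mux that dispatches to the healthz handler. That is plain http.Handler composition; the toy filters below only illustrate the shape of such a chain and are not the apiserver's real WithAuthentication/WithAuthorization implementations, and the listen address is an assumption.

package main

import (
	"fmt"
	"net/http"
)

// withFilter wraps next with a named pass-through filter; a real filter would
// authenticate, authorize, or rate-limit before delegating.
func withFilter(name string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-Filter-"+name, "passed")
		next.ServeHTTP(w, r)
	})
}

func main() {
	healthz := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	// The innermost handler is wrapped last; requests traverse the chain outside-in,
	// matching the ordering shown in the stack traces above.
	var h http.Handler = healthz
	for _, name := range []string{"authorization", "maxinflight", "impersonation", "authentication"} {
		h = withFilter(name, h)
	}
	http.ListenAndServe("127.0.0.1:8080", h)
}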
I0320 07:29:21.335874  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.095962ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.357000  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.133976ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.357322  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0320 07:29:21.376036  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.160466ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.396835  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.962405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.397120  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0320 07:29:21.415902  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.043424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.416295  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.416473  105913 wrap.go:47] GET /healthz: (1.211798ms) 500
goroutine 30194 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022721c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022721c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0035e1dc0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e411808, 0xc00317f040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9600)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9600)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9600)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9600)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9400)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e411808, 0xc0092c9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc001280e40, 0xc00f381720, 0x75f71a0, 0xc00e411808, 0xc0092c9400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41000]
I0320 07:29:21.424218  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.424382  105913 wrap.go:47] GET /healthz: (1.021251ms) 500
goroutine 30196 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022725b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022725b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0036f6000, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e411818, 0xc003a54f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9d00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9d00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9d00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9d00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9c00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e411818, 0xc0092c9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0040d0e40, 0xc00f381720, 0x75f71a0, 0xc00e411818, 0xc0092c9c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.436473  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.683393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.436720  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0320 07:29:21.456019  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.208077ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.482325  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.445831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.482583  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0320 07:29:21.496177  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.362211ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.516524  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.516692  105913 wrap.go:47] GET /healthz: (1.391223ms) 500
goroutine 30111 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002305570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002305570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00348c980, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00a600bf0, 0xc00317f680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3b00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3b00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3b00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3b00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3a00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00a600bf0, 0xc007fc3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00594c600, 0xc00f381720, 0x75f71a0, 0xc00a600bf0, 0xc007fc3a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40996]
I0320 07:29:21.517007  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.154546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.517456  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0320 07:29:21.524160  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.524344  105913 wrap.go:47] GET /healthz: (997.465µs) 500
goroutine 30139 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022b15e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022b15e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0037343c0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc012179780, 0xc00317fa40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc012179780, 0xc01288c500)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc012179780, 0xc01288c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc012179780, 0xc01288c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc012179780, 0xc01288c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc012179780, 0xc01288c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc012179780, 0xc01288c500)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc012179780, 0xc01288c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc012179780, 0xc01288c500)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc012179780, 0xc01288c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc012179780, 0xc01288c500)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc012179780, 0xc01288c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc012179780, 0xc01288c400)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc012179780, 0xc01288c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005b1c180, 0xc00f381720, 0x75f71a0, 0xc012179780, 0xc01288c400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.535924  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.140925ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.556718  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.882707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.556924  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0320 07:29:21.581613  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (6.615101ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.596531  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.737155ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.596880  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0320 07:29:21.615685  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (919.603µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.616039  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.616229  105913 wrap.go:47] GET /healthz: (951.866µs) 500
goroutine 30227 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0023059d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0023059d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00348d580, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00a600c50, 0xc00271a3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b100)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b100)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b100)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b100)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b000)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00a600c50, 0xc01260b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00585fe00, 0xc00f381720, 0x75f71a0, 0xc00a600c50, 0xc01260b000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40996]
I0320 07:29:21.624058  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.624252  105913 wrap.go:47] GET /healthz: (891.769µs) 500
goroutine 30216 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002264850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002264850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003715740, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc003a55540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01800)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01800)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01800)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01800)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01700)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00d4dbc20, 0xc011e01700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0059c98c0, 0xc00f381720, 0x75f71a0, 0xc00d4dbc20, 0xc011e01700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.636641  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.817972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.636850  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0320 07:29:21.655858  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.066931ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.676595  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.789842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.676824  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0320 07:29:21.696691  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.131635ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.717057  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.301961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.717314  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0320 07:29:21.718244  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.718419  105913 wrap.go:47] GET /healthz: (2.408681ms) 500
goroutine 30208 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002273f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002273f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0037a9d40, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e411a38, 0xc00271a780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160bb00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160bb00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160bb00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160bb00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160ba00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e411a38, 0xc01160ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003d73bc0, 0xc00f381720, 0x75f71a0, 0xc00e411a38, 0xc01160ba00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41000]
I0320 07:29:21.724035  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.724247  105913 wrap.go:47] GET /healthz: (896.801µs) 500
goroutine 30185 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002280c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002280c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003656a80, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009f85620, 0xc000079900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6f00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6f00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6f00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6f00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6e00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009f85620, 0xc0093e6e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004268000, 0xc00f381720, 0x75f71a0, 0xc009f85620, 0xc0093e6e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.735826  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.048942ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.756588  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.744389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.756834  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0320 07:29:21.776170  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.370834ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.796770  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.965503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.796996  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0320 07:29:21.815779  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (988.225µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:21.816592  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.816771  105913 wrap.go:47] GET /healthz: (1.489922ms) 500
goroutine 30258 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022524d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022524d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0038c0900, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e411a88, 0xc00271ac80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee100)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee100)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee100)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee100)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee000)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e411a88, 0xc0094ee000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004f32060, 0xc00f381720, 0x75f71a0, 0xc00e411a88, 0xc0094ee000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40996]
I0320 07:29:21.824172  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.824333  105913 wrap.go:47] GET /healthz: (944.089µs) 500
goroutine 30169 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022d7880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022d7880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00373f4c0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009ce8808, 0xc00271b540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222600)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222600)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222600)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222600)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222500)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009ce8808, 0xc011222500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003588900, 0xc00f381720, 0x75f71a0, 0xc009ce8808, 0xc011222500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.836328  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.553628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.836532  105913 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0320 07:29:21.855785  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.05101ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.857450  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.183361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.876449  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.590174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.876701  105913 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0320 07:29:21.895786  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.01342ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.897386  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.148824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.916166  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.916329  105913 wrap.go:47] GET /healthz: (1.064447ms) 500
goroutine 30222 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002265c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002265c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0039bacc0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc00fda7b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5f00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5f00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5f00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5f00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5e00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00d4dbde0, 0xc0112a5e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0032655c0, 0xc00f381720, 0x75f71a0, 0xc00d4dbde0, 0xc0112a5e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41000]
I0320 07:29:21.916636  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.842136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.916830  105913 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0320 07:29:21.924103  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:21.924332  105913 wrap.go:47] GET /healthz: (985.8µs) 500
goroutine 30268 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0022539d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0022539d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003abde80, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e411cc0, 0xc0010fec80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566700)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566700)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566700)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566700)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566600)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e411cc0, 0xc009566600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00593fce0, 0xc00f381720, 0x75f71a0, 0xc00e411cc0, 0xc009566600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.935850  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.054498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.937289  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.054179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.956504  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.677659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.956709  105913 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0320 07:29:21.975782  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.03654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.977281  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.082249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.996764  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.971948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:21.996955  105913 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0320 07:29:22.015973  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.116045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.016120  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:22.016328  105913 wrap.go:47] GET /healthz: (1.020221ms) 500
goroutine 30306 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00213e380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00213e380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003b38300, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc003a55a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b200)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b200)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b200)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b200)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b100)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00d4dbf18, 0xc00956b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006533800, 0xc00f381720, 0x75f71a0, 0xc00d4dbf18, 0xc00956b100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41000]
I0320 07:29:22.017488  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.071588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.024050  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:22.024245  105913 wrap.go:47] GET /healthz: (850.842µs) 500
goroutine 30323 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00210e3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00210e3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003c0b1c0, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009ce8a38, 0xc00271bb80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3500)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3500)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3500)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3500)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3400)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009ce8a38, 0xc0095a3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00662b080, 0xc00f381720, 0x75f71a0, 0xc009ce8a38, 0xc0095a3400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.036449  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.682255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.036680  105913 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0320 07:29:22.055843  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.070677ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.057430  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.154599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.076496  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.708228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.076719  105913 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0320 07:29:22.096183  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.287913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.098463  105913 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.465222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.116287  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:22.116491  105913 wrap.go:47] GET /healthz: (1.183884ms) 500
goroutine 30292 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0021315e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0021315e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003c0db00, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00e411f28, 0xc002f04500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b100)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b100)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b100)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b100)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b000)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00e411f28, 0xc00963b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0069b9bc0, 0xc00f381720, 0x75f71a0, 0xc00e411f28, 0xc00963b000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41000]
I0320 07:29:22.116823  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.9952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.117104  105913 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0320 07:29:22.124111  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:22.124297  105913 wrap.go:47] GET /healthz: (911.667µs) 500
goroutine 30332 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00210fab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00210fab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003c98840, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009ce9028, 0xc00367f400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687200)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687200)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687200)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687200)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687100)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009ce9028, 0xc009687100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc000045e60, 0xc00f381720, 0x75f71a0, 0xc009ce9028, 0xc009687100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.135855  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.037176ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.137479  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.193813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.156748  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.985029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.156970  105913 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0320 07:29:22.175972  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.195901ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.178286  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.830801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.196535  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.748155ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.196744  105913 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0320 07:29:22.215951  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:22.216056  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.270368ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.216135  105913 wrap.go:47] GET /healthz: (869.306µs) 500
goroutine 30339 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0013749a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0013749a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003dce520, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc009ce9170, 0xc0010ff180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0a00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0a00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0a00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0a00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0900)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc009ce9170, 0xc0099b0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0069fca20, 0xc00f381720, 0x75f71a0, 0xc009ce9170, 0xc0099b0900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:41000]
I0320 07:29:22.218290  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.571459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.224401  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:22.224552  105913 wrap.go:47] GET /healthz: (1.222329ms) 500
goroutine 30301 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0007b7030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0007b7030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003d83180, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc0038961e8, 0xc00367fa40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f300)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f300)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f300)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f300)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f200)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc0038961e8, 0xc00988f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006a76540, 0xc00f381720, 0x75f71a0, 0xc0038961e8, 0xc00988f200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.236312  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.585105ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.236575  105913 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0320 07:29:22.256137  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.252809ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.257942  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.313725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.276688  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.868157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.276907  105913 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0320 07:29:22.295912  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.084872ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.297460  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.175082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.316372  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:22.316539  105913 wrap.go:47] GET /healthz: (1.266519ms) 500
goroutine 30315 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00213ef50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00213ef50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003c6a840, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc006d04078, 0xc00c175040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ad00)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ad00)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ad00)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ad00)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ac00)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc006d04078, 0xc00974ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0067672c0, 0xc00f381720, 0x75f71a0, 0xc006d04078, 0xc00974ac00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40996]
I0320 07:29:22.316606  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.825495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.316837  105913 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0320 07:29:22.324177  105913 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 07:29:22.324334  105913 wrap.go:47] GET /healthz: (964.321µs) 500
goroutine 30285 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002235490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002235490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003da0840, 0x1f4)
net/http.Error(0x7fc60cf7f2a8, 0xc00a601260, 0xc002f05a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc60cf7f2a8, 0xc00a601260, 0xc009772700)
net/http.HandlerFunc.ServeHTTP(0xc00a35dc00, 0x7fc60cf7f2a8, 0xc00a601260, 0xc009772700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc00d372040, 0x7fc60cf7f2a8, 0xc00a601260, 0xc009772700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc010547260, 0x7fc60cf7f2a8, 0xc00a601260, 0xc009772700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4538533, 0xe, 0xc010f7b290, 0xc010547260, 0x7fc60cf7f2a8, 0xc00a601260, 0xc009772700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc60cf7f2a8, 0xc00a601260, 0xc009772700)
net/http.HandlerFunc.ServeHTTP(0xc01121d3c0, 0x7fc60cf7f2a8, 0xc00a601260, 0xc009772700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc60cf7f2a8, 0xc00a601260, 0xc009772700)
net/http.HandlerFunc.ServeHTTP(0xc011282480, 0x7fc60cf7f2a8, 0xc00a601260, 0xc009772700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc60cf7f2a8, 0xc00a601260, 0xc009772700)
net/http.HandlerFunc.ServeHTTP(0xc01121d400, 0x7fc60cf7f2a8, 0xc00a601260, 0xc009772700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc60cf7f2a8, 0xc00a601260, 0xc009772600)
net/http.HandlerFunc.ServeHTTP(0xc011009ef0, 0x7fc60cf7f2a8, 0xc00a601260, 0xc009772600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0073c0120, 0xc00f381720, 0x75f71a0, 0xc00a601260, 0xc009772600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.335772  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.014961ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.337428  105913 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.199321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.358805  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.330975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.359035  105913 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0320 07:29:22.375810  105913 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.02378ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.377443  105913 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.193505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.396880  105913 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.05472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.397145  105913 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0320 07:29:22.416452  105913 wrap.go:47] GET /healthz: (1.061909ms) 200 [Go-http-client/1.1 127.0.0.1:41000]
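For context on the repeated GET /healthz 500s above: the endpoint aggregates named checks, printing a "[+]name ok" or "[-]name failed" line per check, and keeps returning 500 while the rbac/bootstrap-roles post-start hook is unfinished, then flips to 200 once every check passes (as it finally does here). Below is a minimal, self-contained Go sketch of that aggregation pattern with invented names; it is an illustration, not the apiserver's healthz package.

package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

type check struct {
	name string
	fn   func() error
}

func main() {
	var bootstrapDone atomic.Bool // stands in for the rbac bootstrap post-start hook finishing

	checks := []check{
		{"ping", func() error { return nil }},
		{"poststarthook/rbac/bootstrap-roles", func() error {
			if !bootstrapDone.Load() {
				return fmt.Errorf("not finished")
			}
			return nil
		}},
	}

	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			// Mirrors the 500s above: report per-check status, fail overall.
			http.Error(w, body+"healthz check failed", http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, body+"healthz check passed")
	})

	// In the real run the hook completes on its own; simulated here so the
	// example is self-contained.
	bootstrapDone.Store(true)
	_ = http.ListenAndServe("127.0.0.1:8080", nil)
}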
W0320 07:29:22.417256  105913 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 07:29:22.417321  105913 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 07:29:22.417349  105913 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 07:29:22.417364  105913 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 07:29:22.417377  105913 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 07:29:22.417404  105913 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 07:29:22.417419  105913 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 07:29:22.417431  105913 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 07:29:22.417448  105913 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 07:29:22.417466  105913 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0320 07:29:22.417528  105913 factory.go:331] Creating scheduler from algorithm provider 'DefaultProvider'
I0320 07:29:22.417546  105913 factory.go:412] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
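The factory line above lists the fit predicates and priority functions the DefaultProvider wires in. As a rough illustration of that two-phase filter-then-score structure, here is a self-contained Go sketch with invented types and predicate names; it is not the kube-scheduler implementation.

package main

import "fmt"

type pod struct{ cpu, mem int64 }
type node struct {
	name     string
	cpu, mem int64
}

// A fit predicate answers yes/no: can this pod run on this node at all?
type fitPredicate func(p pod, n node) bool

// A priority function scores a feasible node; higher is better.
type priorityFunc func(p pod, n node) int64

var predicates = map[string]fitPredicate{
	"EnoughCPU":    func(p pod, n node) bool { return n.cpu >= p.cpu },
	"EnoughMemory": func(p pod, n node) bool { return n.mem >= p.mem },
}

var priorities = map[string]priorityFunc{
	"LeastRequested": func(p pod, n node) int64 { return (n.cpu - p.cpu) + (n.mem - p.mem) },
}

func schedule(p pod, nodes []node) (string, bool) {
	best, bestScore, found := "", int64(-1), false
	for _, n := range nodes {
		feasible := true
		for _, pred := range predicates {
			if !pred(p, n) {
				feasible = false // analogous to "Insufficient cpu / Insufficient memory"
				break
			}
		}
		if !feasible {
			continue
		}
		var score int64
		for _, prio := range priorities {
			score += prio(p, n)
		}
		if !found || score > bestScore {
			best, bestScore, found = n.name, score, true
		}
	}
	return best, found
}

func main() {
	nodes := []node{{"node1", 4000, 8192}}
	if name, ok := schedule(pod{cpu: 500, mem: 512}, nodes); ok {
		fmt.Println("bound to", name)
	} else {
		fmt.Println("unschedulable")
	}
}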
I0320 07:29:22.417747  105913 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0320 07:29:22.418001  105913 reflector.go:123] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:211
I0320 07:29:22.418026  105913 reflector.go:161] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:211
I0320 07:29:22.419065  105913 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (688.792µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0320 07:29:22.419813  105913 get.go:251] Starting watch for /api/v1/pods, rv=22251 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m35s
I0320 07:29:22.424258  105913 wrap.go:47] GET /healthz: (912.085µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.425582  105913 wrap.go:47] GET /api/v1/namespaces/default: (1.007019ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.427438  105913 wrap.go:47] POST /api/v1/namespaces: (1.505203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.428690  105913 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (882.381µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.432045  105913 wrap.go:47] POST /api/v1/namespaces/default/services: (2.96863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.433403  105913 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (913.729µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.435272  105913 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.42515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.517981  105913 shared_informer.go:123] caches populated
I0320 07:29:22.518018  105913 controller_utils.go:1034] Caches are synced for scheduler controller
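The "caches populated" / "Caches are synced" lines come from waiting on shared-informer caches before scheduling proceeds. A minimal client-go sketch of that wait follows, using a fake clientset so it runs standalone; the wiring is illustrative, not the test's own setup.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/tools/cache"
)

func main() {
	client := fake.NewSimpleClientset()
	factory := informers.NewSharedInformerFactory(client, 1*time.Second)

	podsSynced := factory.Core().V1().Pods().Informer().HasSynced
	nodesSynced := factory.Core().V1().Nodes().Informer().HasSynced

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Block until every started informer has populated its local cache,
	// mirroring the "caches populated" / "Caches are synced" lines above.
	if !cache.WaitForCacheSync(stop, podsSynced, nodesSynced) {
		fmt.Println("timed out waiting for caches to sync")
		return
	}
	fmt.Println("caches are synced")
}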
I0320 07:29:22.518415  105913 reflector.go:123] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518444  105913 reflector.go:161] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518449  105913 reflector.go:123] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518471  105913 reflector.go:161] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518484  105913 reflector.go:123] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518517  105913 reflector.go:161] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518522  105913 reflector.go:123] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518551  105913 reflector.go:161] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518747  105913 reflector.go:123] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518769  105913 reflector.go:161] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518823  105913 reflector.go:123] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518844  105913 reflector.go:161] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518963  105913 reflector.go:123] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.518987  105913 reflector.go:161] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.519045  105913 reflector.go:123] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.519065  105913 reflector.go:161] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.519193  105913 reflector.go:123] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.519227  105913 reflector.go:161] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0320 07:29:22.520022  105913 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (391.733µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41194]
I0320 07:29:22.520066  105913 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (551.971µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40996]
I0320 07:29:22.520068  105913 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (466.784µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41208]
I0320 07:29:22.520173  105913 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (385.337µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41202]
I0320 07:29:22.520462  105913 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (318.797µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41192]
I0320 07:29:22.520489  105913 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (334.215µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41206]
I0320 07:29:22.520582  105913 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (350.576µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41200]
I0320 07:29:22.520756  105913 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=22251 labels= fields= timeout=9m52s
I0320 07:29:22.520883  105913 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (329.837µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41198]
I0320 07:29:22.520984  105913 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=22251 labels= fields= timeout=9m49s
I0320 07:29:22.521207  105913 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (1.605713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41204]
I0320 07:29:22.521480  105913 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=22251 labels= fields= timeout=7m11s
I0320 07:29:22.521527  105913 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=22251 labels= fields= timeout=6m15s
I0320 07:29:22.521207  105913 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=22251 labels= fields= timeout=5m9s
I0320 07:29:22.521810  105913 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=22251 labels= fields= timeout=8m16s
I0320 07:29:22.521858  105913 get.go:251] Starting watch for /api/v1/services, rv=22398 labels= fields= timeout=5m50s
I0320 07:29:22.521899  105913 get.go:251] Starting watch for /api/v1/nodes, rv=22251 labels= fields= timeout=9m17s
I0320 07:29:22.521967  105913 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=22251 labels= fields= timeout=8m15s
I0320 07:29:22.618367  105913 shared_informer.go:123] caches populated
I0320 07:29:22.718571  105913 shared_informer.go:123] caches populated
I0320 07:29:22.819578  105913 shared_informer.go:123] caches populated
I0320 07:29:22.919849  105913 shared_informer.go:123] caches populated
I0320 07:29:23.020122  105913 shared_informer.go:123] caches populated
I0320 07:29:23.120339  105913 shared_informer.go:123] caches populated
I0320 07:29:23.220496  105913 shared_informer.go:123] caches populated
I0320 07:29:23.320732  105913 shared_informer.go:123] caches populated
I0320 07:29:23.420933  105913 shared_informer.go:123] caches populated
I0320 07:29:23.520768  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:23.521144  105913 shared_informer.go:123] caches populated
I0320 07:29:23.521177  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:23.521316  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:23.521631  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:23.521740  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:23.524126  105913 wrap.go:47] POST /api/v1/nodes: (2.335093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41216]
I0320 07:29:23.526736  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.990654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41216]
I0320 07:29:23.527115  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0
I0320 07:29:23.527135  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0
I0320 07:29:23.527268  105913 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0", node "node1"
I0320 07:29:23.527289  105913 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0320 07:29:23.527336  105913 factory.go:733] Attempting to bind rpod-0 to node1
I0320 07:29:23.528964  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.814564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41216]
I0320 07:29:23.529674  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1
I0320 07:29:23.529695  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1
I0320 07:29:23.529789  105913 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1", node "node1"
I0320 07:29:23.529802  105913 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0320 07:29:23.529817  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-0/binding: (2.032154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.529837  105913 factory.go:733] Attempting to bind rpod-1 to node1
I0320 07:29:23.529931  105913 scheduler.go:572] pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 07:29:23.531438  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.305457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41216]
I0320 07:29:23.531879  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1/binding: (1.834894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.532039  105913 scheduler.go:572] pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 07:29:23.533301  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.057817ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.632041  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-0: (1.918987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.734872  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1: (2.065164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.735519  105913 preemption_test.go:561] Creating the preemptor pod...
I0320 07:29:23.738267  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.489563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.738493  105913 preemption_test.go:567] Creating additional pods...
I0320 07:29:23.738618  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:23.738630  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:23.738737  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.738780  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.741316  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.154361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41246]
I0320 07:29:23.741550  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod/status: (1.845925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41216]
I0320 07:29:23.741644  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.644663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.741644  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.085414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41248]
I0320 07:29:23.746259  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (4.118701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.746672  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (4.545038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41246]
I0320 07:29:23.747409  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.749602  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod/status: (1.473735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41246]
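The cycle above repeats per pod: the scheduler finds no fit (Insufficient cpu, Insufficient memory), sets PodScheduled=False with Reason=Unschedulable, and marks node1 as a potential node for preemption. A self-contained Go sketch of that decision, with invented types and a simplified "would evicting every lower-priority pod make room" check (not the generic_scheduler.go code):

package main

import "fmt"

type pod struct {
	name     string
	priority int32
	cpu, mem int64
}

type node struct {
	name     string
	cpu, mem int64 // allocatable
	pods     []pod // currently bound pods
}

func free(n node) (cpu, mem int64) {
	cpu, mem = n.cpu, n.mem
	for _, p := range n.pods {
		cpu -= p.cpu
		mem -= p.mem
	}
	return
}

func fits(p pod, n node) bool {
	cpu, mem := free(n)
	return cpu >= p.cpu && mem >= p.mem
}

// potentialForPreemption reports whether removing every pod with lower
// priority than the preemptor would make it fit on the node.
func potentialForPreemption(preemptor pod, n node) bool {
	kept := n
	kept.pods = nil
	for _, p := range n.pods {
		if p.priority >= preemptor.priority {
			kept.pods = append(kept.pods, p)
		}
	}
	return fits(preemptor, kept)
}

func main() {
	n := node{name: "node1", cpu: 1000, mem: 1024,
		pods: []pod{{"rpod-0", 0, 500, 512}, {"rpod-1", 0, 500, 512}}}
	preemptor := pod{name: "preemptor-pod", priority: 100, cpu: 600, mem: 600}

	if fits(preemptor, n) {
		fmt.Println("schedulable without preemption")
		return
	}
	fmt.Println("0/1 nodes are available; marking PodScheduled=False, Reason=Unschedulable")
	if potentialForPreemption(preemptor, n) {
		fmt.Printf("Node %s is a potential node for preemption.\n", n.name)
	}
}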
I0320 07:29:23.750318  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.266475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.752673  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.875888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.754069  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1: (4.088987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41246]
I0320 07:29:23.754467  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:23.754488  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:23.754607  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.754641  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.755348  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.025649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.756534  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.086895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41246]
I0320 07:29:23.757632  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0/status: (2.697502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41216]
I0320 07:29:23.757926  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (2.680286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41260]
I0320 07:29:23.758495  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.272157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.759340  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.329277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41246]
I0320 07:29:23.760300  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (2.354242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41216]
I0320 07:29:23.760538  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.712044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41218]
I0320 07:29:23.760563  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.760823  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:23.760839  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:23.760910  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.760983  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.762450  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.347674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41246]
I0320 07:29:23.762730  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (1.545292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41260]
I0320 07:29:23.764578  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1/status: (2.973521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41262]
I0320 07:29:23.765424  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.82822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41264]
I0320 07:29:23.766517  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.76406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41246]
I0320 07:29:23.766780  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (1.814605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41262]
I0320 07:29:23.767061  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.767343  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:23.767356  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:23.767447  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.767479  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.771499  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2/status: (3.756689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41260]
I0320 07:29:23.771837  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (3.68487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41278]
I0320 07:29:23.772861  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (5.629472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41264]
I0320 07:29:23.773731  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (1.754599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41260]
I0320 07:29:23.773938  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.774176  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:23.774191  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:23.774273  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.774304  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.776424  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.852796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41264]
I0320 07:29:23.778542  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (9.93238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41280]
I0320 07:29:23.779541  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (960.663µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41278]
I0320 07:29:23.780414  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3/status: (1.787346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41260]
I0320 07:29:23.780719  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.574828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41264]
I0320 07:29:23.781153  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.491544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41280]
I0320 07:29:23.781601  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (892.773µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41260]
I0320 07:29:23.781813  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.782251  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:23.782267  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:23.782345  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.782378  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.784410  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.333237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41304]
I0320 07:29:23.785558  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (2.052718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41278]
I0320 07:29:23.786059  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4/status: (2.321521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41280]
I0320 07:29:23.786470  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (5.421439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41264]
I0320 07:29:23.788251  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (1.034637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41278]
I0320 07:29:23.788505  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.788680  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:23.788696  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:23.788796  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.788836  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.790345  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (1.080978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41304]
I0320 07:29:23.790533  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.970702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41264]
I0320 07:29:23.791241  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (2.269983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41278]
I0320 07:29:23.791462  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.791560  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:23.791568  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:23.791631  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.791665  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.793264  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (1.271641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41304]
I0320 07:29:23.793593  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.175818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41264]
I0320 07:29:23.794269  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-0.158d9a2ccefd3d4f: (4.721872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.794311  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5/status: (2.309774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41278]
I0320 07:29:23.795913  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.227259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.796211  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (1.345994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41304]
I0320 07:29:23.796332  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.304565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41264]
I0320 07:29:23.796357  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.796490  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:23.796504  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:23.796575  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.796610  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.798196  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.194158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.798288  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6/status: (1.494539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41304]
I0320 07:29:23.798644  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.464717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41308]
I0320 07:29:23.801328  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (2.68424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41304]
I0320 07:29:23.801657  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.801723  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (3.228435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41310]
I0320 07:29:23.802114  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:23.802136  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:23.802215  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.802253  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.802350  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.406771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41308]
I0320 07:29:23.803426  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (902.079µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.804615  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7/status: (2.155497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41304]
I0320 07:29:23.805466  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.771627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41308]
I0320 07:29:23.805832  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.572409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41314]
I0320 07:29:23.807303  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (2.401053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41304]
I0320 07:29:23.807507  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.807766  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:23.807804  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:23.807889  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.807920  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.808758  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.418405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41308]
I0320 07:29:23.809545  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (1.283989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.809677  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.20593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41322]
I0320 07:29:23.809850  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8/status: (1.673121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41304]
I0320 07:29:23.812562  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (2.343543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41322]
I0320 07:29:23.812787  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.813049  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:23.813062  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:23.813182  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.813220  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.815406  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (1.790093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.815454  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (6.320353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41308]
I0320 07:29:23.817727  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.957389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41308]
I0320 07:29:23.817748  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.790909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41332]
I0320 07:29:23.818117  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9/status: (4.489625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41322]
I0320 07:29:23.822652  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (4.365497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41332]
I0320 07:29:23.823034  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (4.636367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.823286  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.823425  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:23.823436  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:23.823518  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.823558  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.828155  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (4.192251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41356]
I0320 07:29:23.832469  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10/status: (8.712631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.832789  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (8.842396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41354]
I0320 07:29:23.833504  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (10.520187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41332]
I0320 07:29:23.834442  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (1.062325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.834653  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.834790  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:23.834813  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:23.834891  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.834926  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.836409  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (1.184488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.837592  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.922562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41356]
I0320 07:29:23.840212  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11/status: (3.342597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.841408  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (7.206957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41332]
I0320 07:29:23.842933  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (1.693352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.843254  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.843443  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:23.843459  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:23.843553  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.843615  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.851927  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (9.348105ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41332]
I0320 07:29:23.852763  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.263424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41414]
I0320 07:29:23.853752  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (2.981581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.853783  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12/status: (3.707685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41356]
I0320 07:29:23.859581  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (4.763757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41356]
I0320 07:29:23.860352  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (7.430226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41332]
I0320 07:29:23.861057  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.863120  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.836493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41306]
I0320 07:29:23.864166  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:23.864185  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:23.864288  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.864336  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.866912  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (3.385265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41356]
I0320 07:29:23.868120  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13/status: (3.173836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41414]
I0320 07:29:23.868510  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (3.532031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41484]
I0320 07:29:23.869207  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.897677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41482]
I0320 07:29:23.872384  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.773957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41356]
I0320 07:29:23.874180  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (4.437373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41414]
I0320 07:29:23.875594  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.801327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41482]
I0320 07:29:23.876171  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.877295  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:23.877310  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:23.877415  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.877455  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.878727  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.089211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41414]
I0320 07:29:23.882268  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.349493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41520]
I0320 07:29:23.882426  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.85199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41518]
I0320 07:29:23.882819  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14/status: (3.281943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41484]
I0320 07:29:23.883184  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (3.595425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41414]
I0320 07:29:23.886003  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (2.637632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41484]
I0320 07:29:23.886250  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.886956  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (4.189207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41520]
I0320 07:29:23.887355  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:23.887367  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:23.887459  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.887490  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.896519  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15/status: (8.827932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41526]
I0320 07:29:23.896565  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (8.297734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41550]
I0320 07:29:23.896944  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (9.600047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41414]
I0320 07:29:23.899208  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (2.155244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41526]
I0320 07:29:23.899518  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.899741  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:23.899796  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:23.899803  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.217012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41414]
I0320 07:29:23.899955  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.900022  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.900041  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (11.465397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41552]
I0320 07:29:23.901849  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.347187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41550]
I0320 07:29:23.901945  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (1.584057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41552]
I0320 07:29:23.902284  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16/status: (1.579935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41566]
I0320 07:29:23.902515  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.27873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41526]
I0320 07:29:23.904338  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (1.594253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41552]
I0320 07:29:23.904642  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.904804  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:23.904817  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:23.904933  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.904995  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.906274  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.786741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41526]
I0320 07:29:23.907256  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.616427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41568]
I0320 07:29:23.907737  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (2.115715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41550]
I0320 07:29:23.909545  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.661411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41526]
I0320 07:29:23.909763  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17/status: (3.629604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41552]
I0320 07:29:23.911376  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (1.24941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41568]
I0320 07:29:23.912270  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.912481  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:23.912500  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:23.912573  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.912613  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.914599  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.36553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0320 07:29:23.918736  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18/status: (5.898464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41568]
I0320 07:29:23.919118  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (5.827613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41570]
I0320 07:29:23.919404  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (9.362365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41550]
I0320 07:29:23.921541  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.800052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41570]
I0320 07:29:23.921567  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (2.404289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41568]
I0320 07:29:23.921767  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.921894  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:23.921909  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:23.922007  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.922045  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.923688  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.798642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41570]
I0320 07:29:23.924365  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.542784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41576]
I0320 07:29:23.925030  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19/status: (2.257041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0320 07:29:23.925228  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (2.304899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41574]
I0320 07:29:23.925594  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.48905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41570]
I0320 07:29:23.926648  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (927.477µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0320 07:29:23.926877  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.927015  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:23.927029  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:23.927141  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.927192  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.927637  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.466357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41570]
I0320 07:29:23.928985  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20/status: (1.496063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0320 07:29:23.929413  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.292166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41576]
I0320 07:29:23.929709  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.467837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41578]
I0320 07:29:23.929710  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.50308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41570]
I0320 07:29:23.931918  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (2.553742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0320 07:29:23.932006  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.792107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41570]
I0320 07:29:23.932137  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.932441  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:23.932463  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:23.932547  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.932583  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.934116  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (1.002544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41580]
I0320 07:29:23.934269  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.794369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0320 07:29:23.934978  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.436112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41582]
I0320 07:29:23.935059  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21/status: (1.930506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41576]
I0320 07:29:23.936803  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (1.253048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41576]
I0320 07:29:23.937177  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.114093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41572]
I0320 07:29:23.937178  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.937347  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:23.937359  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:23.937446  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.937477  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.939457  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.424681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41586]
I0320 07:29:23.939591  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.054649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41582]
I0320 07:29:23.939892  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (2.304865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41580]
I0320 07:29:23.940096  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22/status: (2.093935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41584]
I0320 07:29:23.941499  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.557179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41582]
I0320 07:29:23.942588  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (2.071977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41580]
I0320 07:29:23.942892  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.943408  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:23.943437  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:23.943544  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.943591  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.944803  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (1.014077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41586]
I0320 07:29:23.946035  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.909014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0320 07:29:23.946799  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23/status: (3.01072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41582]
I0320 07:29:23.948266  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (1.002807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0320 07:29:23.948569  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.948743  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:23.948761  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:23.948905  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.949250  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.950477  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (1.235186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0320 07:29:23.951642  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (1.708878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41586]
I0320 07:29:23.951866  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.952021  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:23.952037  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:23.952134  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.952176  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.952333  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-10.158d9a2cd318b189: (2.457741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41594]
I0320 07:29:23.954149  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.427357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41594]
I0320 07:29:23.954593  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.313157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0320 07:29:23.954907  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24/status: (2.446863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41586]
I0320 07:29:23.956922  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.228064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0320 07:29:23.957196  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.957370  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:23.957384  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:23.957501  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.957544  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.958935  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (1.053771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41594]
I0320 07:29:23.959571  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.417693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0320 07:29:23.959872  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25/status: (2.059044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41592]
I0320 07:29:23.961248  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (969.966µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0320 07:29:23.961516  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.961664  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:23.961680  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:23.961777  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.961823  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.963415  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (967.693µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0320 07:29:23.963557  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (1.136135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41594]
I0320 07:29:23.963660  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.963813  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:23.963830  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:23.963953  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.963992  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.964783  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-12.158d9a2cd44a9774: (2.219552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41598]
I0320 07:29:23.965272  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (947.873µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0320 07:29:23.966381  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26/status: (2.125404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41594]
I0320 07:29:23.967012  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.150217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41598]
I0320 07:29:23.968161  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (953.166µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41594]
I0320 07:29:23.968415  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.968562  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:23.968576  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:23.968647  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.968691  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.970339  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.166032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0320 07:29:23.970723  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.483726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41600]
I0320 07:29:23.970808  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27/status: (1.904709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41598]
I0320 07:29:23.972221  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (985.685µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41600]
I0320 07:29:23.972468  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.972623  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:23.972642  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:23.972774  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.972819  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.974019  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (859.23µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0320 07:29:23.974546  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28/status: (1.478549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41600]
I0320 07:29:23.974655  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.246875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41602]
I0320 07:29:23.975993  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (1.047582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41600]
I0320 07:29:23.976285  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.976419  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:23.976434  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:23.976511  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.976566  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.979167  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.954361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41604]
I0320 07:29:23.979639  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (2.122651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0320 07:29:23.980008  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29/status: (3.18762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41600]
I0320 07:29:23.983378  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (1.154645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0320 07:29:23.983648  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.983807  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:23.983820  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:23.983916  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.983972  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.985824  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (1.17627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41606]
I0320 07:29:23.985943  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30/status: (1.730811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41596]
I0320 07:29:23.986048  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.724275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41604]
I0320 07:29:23.987636  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (1.071601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41604]
I0320 07:29:23.987901  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.988052  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:23.988067  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:23.988193  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.988235  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.989575  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (1.04706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41606]
I0320 07:29:23.990136  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.108509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I0320 07:29:23.990328  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31/status: (1.681128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41604]
I0320 07:29:23.991861  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (1.144477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I0320 07:29:23.992128  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.992249  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:23.992264  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:23.992329  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.992366  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.994070  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.196849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41610]
I0320 07:29:23.994307  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32/status: (1.722755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I0320 07:29:23.994370  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (1.465913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41606]
I0320 07:29:23.995752  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (1.0737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I0320 07:29:23.995986  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.996152  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:23.996170  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:23.996279  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:23.996326  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:23.997520  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (954.782µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41610]
I0320 07:29:23.998133  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.199882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41612]
I0320 07:29:23.998257  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33/status: (1.72094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41608]
I0320 07:29:23.999620  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (988.895µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41612]
I0320 07:29:23.999852  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:23.999988  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:24.000004  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:24.000095  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.000143  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.001438  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (1.09902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41612]
I0320 07:29:24.002722  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.386083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41614]
I0320 07:29:24.002882  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34/status: (2.282861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41610]
I0320 07:29:24.004463  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (1.107029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41614]
I0320 07:29:24.004771  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.004903  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:24.004919  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:24.005011  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.005057  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.006330  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (1.08018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41614]
I0320 07:29:24.006889  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.390278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41616]
I0320 07:29:24.006930  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35/status: (1.66831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41612]
I0320 07:29:24.008442  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (1.12445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41612]
I0320 07:29:24.008733  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.008893  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:24.008912  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:24.009022  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.009065  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.010599  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.321965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41614]
I0320 07:29:24.010855  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.261784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0320 07:29:24.011410  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36/status: (2.134175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41616]
I0320 07:29:24.013112  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.321696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41614]
I0320 07:29:24.013398  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.013570  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:24.013594  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:24.013710  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.013757  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.015602  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (1.613903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0320 07:29:24.015614  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37/status: (1.618943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41614]
I0320 07:29:24.015931  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.655848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0320 07:29:24.016965  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (960.222µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41614]
I0320 07:29:24.017237  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.017400  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:24.017417  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:24.017508  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.017552  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.018709  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (940.26µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0320 07:29:24.019556  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38/status: (1.765397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0320 07:29:24.019785  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.705756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I0320 07:29:24.021033  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (1.093804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I0320 07:29:24.021290  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.021450  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:24.021466  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:24.021558  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.021604  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.023127  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (1.289287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0320 07:29:24.023812  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.741394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I0320 07:29:24.023983  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39/status: (2.170205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I0320 07:29:24.025490  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (1.068762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I0320 07:29:24.025765  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.025934  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:24.025956  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:24.026059  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.026154  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.029858  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (1.370645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0320 07:29:24.029908  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (1.447205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I0320 07:29:24.030365  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.030576  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:24.030607  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:24.030729  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.030766  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.031255  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-18.158d9a2cd867b2b1: (2.230023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41626]
I0320 07:29:24.032241  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (1.187647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0320 07:29:24.032686  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40/status: (1.659598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I0320 07:29:24.033063  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.298725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41626]
I0320 07:29:24.034253  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (1.093318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I0320 07:29:24.034583  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.034727  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:24.034741  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:24.034828  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.034875  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.036613  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41/status: (1.484859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41626]
I0320 07:29:24.036617  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (1.404461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0320 07:29:24.037230  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.858902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41628]
I0320 07:29:24.038027  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (962.008µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41626]
I0320 07:29:24.038310  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.038470  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:24.038491  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:24.038623  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.038670  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.039766  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (900.268µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0320 07:29:24.039979  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (1.147034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41628]
I0320 07:29:24.040241  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.040369  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:24.040383  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:24.040464  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.040500  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.041545  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-19.158d9a2cd8f78fe5: (2.186303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41630]
I0320 07:29:24.042573  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (1.265539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0320 07:29:24.042744  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42/status: (2.007961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41628]
I0320 07:29:24.044260  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.304252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41630]
I0320 07:29:24.044589  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (1.423762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41628]
I0320 07:29:24.044812  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.044986  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:24.045005  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:24.045021  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.757811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41632]
I0320 07:29:24.045111  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.045153  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.047277  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.550657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0320 07:29:24.047315  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (1.65195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I0320 07:29:24.047850  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43/status: (2.146751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41628]
I0320 07:29:24.049435  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (1.07193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0320 07:29:24.049710  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.049838  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:24.049852  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:24.049935  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.049976  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.051477  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.016252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41632]
I0320 07:29:24.051800  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.319577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0320 07:29:24.052224  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.052407  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:24.052436  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:24.052542  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.052579  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.053700  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-20.158d9a2cd946230b: (2.730601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0320 07:29:24.053854  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (1.020946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41632]
I0320 07:29:24.054829  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44/status: (2.049601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0320 07:29:24.055368  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.242643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41632]
I0320 07:29:24.056164  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (972.596µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41634]
I0320 07:29:24.056423  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.056634  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:24.056650  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:24.056754  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.056798  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.058195  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (1.137225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0320 07:29:24.058748  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45/status: (1.753244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41632]
I0320 07:29:24.059416  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.088975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41638]
I0320 07:29:24.060443  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (939.846µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41632]
I0320 07:29:24.060730  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.060868  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:24.060884  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:24.060968  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.061012  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.062257  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (929.596µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0320 07:29:24.062864  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.29848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41640]
I0320 07:29:24.063411  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46/status: (2.029283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41638]
I0320 07:29:24.064782  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (1.02397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41640]
I0320 07:29:24.065044  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.065189  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:24.065205  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:24.065288  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.065328  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.066683  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (1.121735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0320 07:29:24.067337  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47/status: (1.76652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41640]
I0320 07:29:24.067755  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.918971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41642]
I0320 07:29:24.068876  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (972.483µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41640]
I0320 07:29:24.069156  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.069345  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:24.069364  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:24.069473  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.069513  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.070842  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (1.015057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0320 07:29:24.071428  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.263298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41644]
I0320 07:29:24.071616  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48/status: (1.890648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41642]
I0320 07:29:24.073027  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (1.111393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41644]
I0320 07:29:24.073361  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.073510  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:24.073527  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:24.073615  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.073658  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.075435  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (1.5101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0320 07:29:24.075963  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.757945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:24.076249  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49/status: (2.379297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41644]
I0320 07:29:24.078063  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (1.12766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:24.078315  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.078467  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:24.078481  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:24.078572  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.078612  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.080140  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.00252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:24.080362  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.080516  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:24.080534  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:24.080614  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:24.080623  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.045042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41648]
I0320 07:29:24.080655  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:24.081609  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-27.158d9a2cdbbf62c6: (2.361108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0320 07:29:24.081865  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (965.781µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41648]
I0320 07:29:24.082335  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:24.082718  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (1.328637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:24.085378  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-41.158d9a2cdfb141c8: (2.455127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41636]
I0320 07:29:24.148373  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.10687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:24.248190  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.965601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:24.348072  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.775726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:24.448352  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.049374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:24.520955  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:24.521335  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:24.521465  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:24.521786  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:24.521863  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:24.548155  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.883374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:24.648319  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.039373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:24.749284  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.768853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:24.853057  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (6.609587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:24.962667  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (13.70202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.047729  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.570185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.157989  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (4.737452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.247857  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.62679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.352381  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (6.099863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.419113  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:25.419140  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:25.419325  105913 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod", node "node1"
I0320 07:29:25.419345  105913 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0320 07:29:25.419411  105913 factory.go:733] Attempting to bind preemptor-pod to node1
I0320 07:29:25.419454  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:25.419482  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:25.419619  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.419672  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.421793  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod/binding: (2.030648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.421954  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (1.798001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41814]
I0320 07:29:25.422044  105913 scheduler.go:572] pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 07:29:25.422242  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.422253  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (2.401402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41648]
I0320 07:29:25.422466  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.422848  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-1.158d9a2ccf5e0154: (2.498825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41816]
I0320 07:29:25.422974  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:25.422985  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:25.423092  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.423128  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.424829  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.302065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.424951  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (1.379683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41818]
I0320 07:29:25.425316  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (1.776532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41814]
I0320 07:29:25.425513  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.425515  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.425935  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:25.425964  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:25.426206  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.426250  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.427860  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (1.440147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41814]
I0320 07:29:25.428047  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.428237  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-2.158d9a2ccfc12240: (2.879742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.428287  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:25.428299  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:25.428363  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.428418  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.429759  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (1.136968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41814]
I0320 07:29:25.430051  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (3.557294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41818]
I0320 07:29:25.430349  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (1.782858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.430597  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.430635  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.430703  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.430746  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:25.430759  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:25.430832  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.430866  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.431989  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-3.158d9a2cd0294996: (2.962656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41820]
I0320 07:29:25.432155  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (1.125185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41818]
I0320 07:29:25.432376  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.432409  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (1.235443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.432597  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.432682  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:25.432698  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:25.432765  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.432801  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.434007  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (984.375µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41820]
I0320 07:29:25.434052  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (1.08965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.434324  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.434368  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.434478  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:25.434492  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:25.434553  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.434587  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.435210  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-4.158d9a2cd0a47503: (2.63748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41818]
I0320 07:29:25.436268  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (1.341025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41820]
I0320 07:29:25.436430  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (1.505612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.436714  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.436749  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.436828  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:25.436846  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:25.436924  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.436974  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.437773  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-0.158d9a2ccefd3d4f: (1.967179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41818]
I0320 07:29:25.438167  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (1.038176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41820]
I0320 07:29:25.438347  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.438492  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:25.438504  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (1.331354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.438507  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:25.438575  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.438608  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.438729  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.439938  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (1.146761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.439944  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (1.103785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41818]
I0320 07:29:25.440248  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.440327  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.440475  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:25.440503  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:25.440566  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.440605  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.440781  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-5.158d9a2cd132260f: (2.357895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41820]
I0320 07:29:25.441861  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (996.547µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41818]
I0320 07:29:25.442130  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.442362  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (1.484693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.442603  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.443118  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:25.443136  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:25.443213  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.443246  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.444798  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-6.158d9a2cd17daa00: (3.228517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41820]
I0320 07:29:25.445143  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (1.409168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.445523  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (2.0442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41818]
I0320 07:29:25.445740  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.445799  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.445988  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:25.446023  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:25.446134  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.447175  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-7.158d9a2cd1d3c41b: (1.866709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41820]
I0320 07:29:25.447188  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.448147  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.706548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41818]
I0320 07:29:25.448338  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (844.192µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41646]
I0320 07:29:25.448519  105913 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0320 07:29:25.449640  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (977.55µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41824]
I0320 07:29:25.450283  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-8.158d9a2cd22a3ef1: (2.275948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41822]
I0320 07:29:25.450324  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.450434  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (2.199823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41820]
I0320 07:29:25.450618  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.450946  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (960.275µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41824]
I0320 07:29:25.451492  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:25.451510  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:25.451626  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.451702  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.452757  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (1.068888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41824]
I0320 07:29:25.453751  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (1.266943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41818]
I0320 07:29:25.453755  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (1.138985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41826]
I0320 07:29:25.453952  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.453972  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.454165  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (946.466µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41824]
I0320 07:29:25.454188  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:25.454201  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:25.454281  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.454323  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.455575  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (1.095459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41818]
I0320 07:29:25.455615  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (1.094676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41826]
I0320 07:29:25.455708  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (1.023823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41828]
I0320 07:29:25.455838  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.455991  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:25.456009  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:25.456069  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.456128  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.456235  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-9.158d9a2cd27b1aaa: (4.840139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41820]
I0320 07:29:25.457492  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (935.4µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41820]
I0320 07:29:25.457658  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (1.614761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41828]
I0320 07:29:25.457695  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.457733  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (1.474954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41818]
I0320 07:29:25.457993  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.458208  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:25.458228  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:25.458305  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.458343  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.458956  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-11.158d9a2cd3c64aec: (2.213438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41830]
I0320 07:29:25.459220  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (1.25974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41828]
I0320 07:29:25.459495  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (983.175µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41820]
I0320 07:29:25.459885  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.460002  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:25.460017  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:25.460100  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.460140  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.460785  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (1.26241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41830]
I0320 07:29:25.460786  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (1.079711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41834]
I0320 07:29:25.460996  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.461282  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.461679  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (1.133968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41836]
I0320 07:29:25.461884  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.461989  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (1.716195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41820]
I0320 07:29:25.462240  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.462370  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:25.462419  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:25.462517  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.462566  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.462537  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-13.158d9a2cd5870959: (2.953237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41828]
I0320 07:29:25.463467  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (2.102801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41834]
I0320 07:29:25.463761  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (949.867µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41828]
I0320 07:29:25.463952  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.465047  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (2.006171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41836]
I0320 07:29:25.465344  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-14.158d9a2cd64f34d9: (2.271319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41826]
I0320 07:29:25.465412  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (1.654294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41834]
I0320 07:29:25.465700  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.465814  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:25.465835  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:25.465914  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.465956  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.467417  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (1.127206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41838]
I0320 07:29:25.467601  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (1.277011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41840]
I0320 07:29:25.467642  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.467753  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:25.467770  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:25.467830  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.467868  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.467913  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.468049  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (1.50398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41834]
I0320 07:29:25.469169  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-15.158d9a2cd6e85d9c: (3.086984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41828]
I0320 07:29:25.469584  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (1.103642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41834]
I0320 07:29:25.469604  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (1.44796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41838]
I0320 07:29:25.469843  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.469883  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (1.854236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41840]
I0320 07:29:25.470102  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.471235  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:25.471258  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:25.471327  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.471404  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.470990  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (1.07663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41834]
I0320 07:29:25.473261  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.501086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41828]
I0320 07:29:25.473412  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (869.141µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41834]
I0320 07:29:25.473488  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.473615  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:25.473628  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:25.473702  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.473734  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.473799  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.662974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41842]
I0320 07:29:25.474007  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.475272  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (1.565586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41834]
I0320 07:29:25.475550  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (1.48531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41842]
I0320 07:29:25.475555  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (1.689246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41828]
I0320 07:29:25.475588  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-16.158d9a2cd7a78503: (5.352848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41840]
I0320 07:29:25.475941  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.476246  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:25.476279  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:25.476345  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.476357  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.476424  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.478220  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (1.195255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41844]
I0320 07:29:25.478458  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.478714  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (2.140734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41842]
I0320 07:29:25.478745  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-17.158d9a2cd7f36dea: (2.021994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41834]
I0320 07:29:25.479035  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.478817  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (2.778474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41828]
I0320 07:29:25.479177  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:25.479198  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:25.479275  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.479315  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.481001  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (1.618761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41842]
I0320 07:29:25.481026  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (1.335115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41846]
I0320 07:29:25.481275  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.482588  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (1.205958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41842]
I0320 07:29:25.483364  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-21.158d9a2cd9986e0e: (2.735852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41848]
I0320 07:29:25.484288  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (1.299529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41842]
I0320 07:29:25.485864  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-22.158d9a2cd9e31af2: (1.941502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41848]
I0320 07:29:25.485884  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (1.164931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41842]
I0320 07:29:25.485954  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (6.458332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41844]
I0320 07:29:25.486358  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.486960  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:25.487028  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:25.487126  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.487201  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.487693  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.127243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41848]
I0320 07:29:25.489227  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-23.158d9a2cda406a61: (2.67038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41846]
I0320 07:29:25.489640  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (1.733511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41848]
I0320 07:29:25.489645  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (1.613171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41852]
I0320 07:29:25.489837  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (2.187509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41850]
I0320 07:29:25.490135  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.490246  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.490485  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:25.490504  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:25.490594  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.490638  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.491478  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (1.159019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41846]
I0320 07:29:25.492482  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (1.243382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0320 07:29:25.492681  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.492702  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (1.450461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0320 07:29:25.493314  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:25.493336  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:25.493422  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.493461  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.493467  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (1.087239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41846]
I0320 07:29:25.494762  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-10.158d9a2cd318b189: (4.397647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41854]
I0320 07:29:25.494890  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (1.209979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41846]
I0320 07:29:25.494926  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.040721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41860]
I0320 07:29:25.495152  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.495269  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (1.547672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41856]
I0320 07:29:25.495469  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.495636  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:25.495664  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:25.495752  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.495798  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.496732  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (1.103685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41860]
I0320 07:29:25.497536  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-24.158d9a2cdac34e7b: (2.225931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41846]
I0320 07:29:25.497749  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (1.613449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41854]
I0320 07:29:25.498009  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.498068  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (1.859066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.498143  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:25.498154  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:25.498237  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.498268  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.498631  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.499108  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.514975  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-25.158d9a2cdb1546ed: (16.936199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41854]
I0320 07:29:25.517971  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (20.447543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41860]
I0320 07:29:25.518376  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (19.883173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41846]
I0320 07:29:25.518739  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (20.034092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.518957  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.519128  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.519952  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:25.519973  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:25.520045  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.520104  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.520446  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-12.158d9a2cd44a9774: (4.869092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0320 07:29:25.520922  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.469958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.521279  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:25.521461  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:25.521597  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:25.521989  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:25.522336  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:25.524414  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-26.158d9a2cdb77ad94: (3.225854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.524471  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (3.202503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41858]
I0320 07:29:25.524471  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (3.85432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41888]
I0320 07:29:25.524571  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (4.0298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41880]
I0320 07:29:25.524702  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.524803  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.524918  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:25.525322  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:25.525423  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.525462  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.526624  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (1.654023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41888]
I0320 07:29:25.527005  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (1.106508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41890]
I0320 07:29:25.527227  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.527367  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (1.196272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41892]
I0320 07:29:25.527598  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.527650  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:25.527666  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:25.527734  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.527770  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.527895  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (912.658µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41888]
I0320 07:29:25.529031  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-28.158d9a2cdbfe4a7c: (4.05239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.529135  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (913.413µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41888]
I0320 07:29:25.529452  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (1.511318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41890]
I0320 07:29:25.529788  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.530333  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (2.437846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41892]
I0320 07:29:25.530559  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.531196  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (1.141834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41888]
I0320 07:29:25.531687  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-29.158d9a2cdc37470a: (2.027626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.532497  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:25.532514  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:25.532594  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.532625  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (1.094559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41892]
I0320 07:29:25.532627  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.533748  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (942.43µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41890]
I0320 07:29:25.533951  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.534894  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.769869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41892]
I0320 07:29:25.535167  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (1.926303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41894]
I0320 07:29:25.535421  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.535589  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:25.535627  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:25.535706  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-30.158d9a2cdca87ccc: (3.440015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.535706  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.535806  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.538064  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (2.508868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41890]
I0320 07:29:25.538449  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (1.87749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41896]
I0320 07:29:25.538657  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.539225  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-31.158d9a2cdce99df2: (3.034313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.539712  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.110981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41890]
I0320 07:29:25.541489  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (1.019414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41890]
I0320 07:29:25.541674  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-32.158d9a2cdd28a585: (1.991996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.542738  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (1.066895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41896]
I0320 07:29:25.542864  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (1.08381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41890]
I0320 07:29:25.543021  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.543192  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:25.543296  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:25.543427  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.543503  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.545049  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-33.158d9a2cdd6512b9: (2.821568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.545168  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (1.90445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41890]
I0320 07:29:25.545489  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (1.523879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41898]
I0320 07:29:25.545551  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (1.871707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41896]
I0320 07:29:25.545705  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.545885  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.545993  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:25.546007  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:25.546135  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.546177  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.547008  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (1.143153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41890]
I0320 07:29:25.547943  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-34.158d9a2cdd9f4ed9: (2.291761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.547982  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (1.350879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41900]
I0320 07:29:25.548180  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.548193  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (1.620759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41898]
I0320 07:29:25.548487  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.548601  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:25.548614  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:25.548678  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.548713  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.548836  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (1.145458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41890]
I0320 07:29:25.549981  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (1.125917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41900]
I0320 07:29:25.550210  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.550296  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (1.227166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41890]
I0320 07:29:25.550351  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:25.550846  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:25.550925  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.550962  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.550480  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (1.290787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.551059  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-35.158d9a2cddea4421: (2.198818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.551585  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.552552  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (1.316268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41900]
I0320 07:29:25.552682  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (1.203287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41890]
I0320 07:29:25.552799  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (1.323649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.552866  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.552966  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:25.552980  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:25.553056  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.553119  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.553382  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.553866  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (896.62µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41900]
I0320 07:29:25.554692  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (1.153264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.554909  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (1.311268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.554933  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.555357  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.555526  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:25.555544  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:25.555750  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-36.158d9a2cde277191: (4.025517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41904]
I0320 07:29:25.555822  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (1.276928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41900]
I0320 07:29:25.555942  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.555975  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.557357  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (1.14482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.557567  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.557586  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (1.027419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41906]
I0320 07:29:25.557833  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:25.557854  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:25.557931  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.558025  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.558760  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-37.158d9a2cde6f0a46: (2.28498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41900]
I0320 07:29:25.558976  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (991.127µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.559168  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (942.059µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.559433  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (1.069811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41910]
I0320 07:29:25.559938  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.561045  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (1.348232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41900]
I0320 07:29:25.562491  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (987.656µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41910]
I0320 07:29:25.562715  105913 preemption_test.go:598] Cleaning up all pods...
I0320 07:29:25.565774  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.566122  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (9.498621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.566409  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.566493  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:25.566515  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:25.566600  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.566639  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.567517  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (4.670432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41910]
I0320 07:29:25.567971  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.119523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.568229  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.568310  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.539849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.568558  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.568673  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:25.568692  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:25.568782  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.568826  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.569140  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-38.158d9a2cdea8f1e2: (3.90862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.570134  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (1.10694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.570370  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (1.363662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.570422  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.570599  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.570735  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:25.570752  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:25.570818  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.570879  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.571765  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-39.158d9a2cdee6c293: (2.086194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.571866  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (4.020013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41910]
I0320 07:29:25.572983  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (1.487243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.573281  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.573363  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (1.880356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.573585  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.573721  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:25.573736  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:25.573814  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.573859  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.574419  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-18.158d9a2cd867b2b1: (2.14754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.575457  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (1.376636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.575674  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.575685  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (1.522892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.575898  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.576054  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:25.576106  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:25.576188  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.576233  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.576529  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (4.352839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41910]
I0320 07:29:25.577110  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-40.158d9a2cdf72906c: (2.177513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.578614  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (1.853924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.578635  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (1.926748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.578887  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.578907  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.579250  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:25.579290  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:25.579371  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.579427  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.579747  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-19.158d9a2cd8f78fe5: (2.038712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.580734  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (1.081285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.580758  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (1.128456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.581268  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.581731  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (4.824023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41910]
I0320 07:29:25.581917  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.582430  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:25.582479  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:25.582271  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-42.158d9a2ce0071ce5: (1.886405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.582592  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.582631  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.584109  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (1.285798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.584153  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (1.295043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.584352  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.584387  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.584509  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:25.584528  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:25.584594  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.584688  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.585627  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-43.158d9a2ce04e1615: (2.274836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41912]
I0320 07:29:25.586551  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.686175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.587149  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.586899  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.9598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.587291  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:25.587305  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:25.587370  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:25.587491  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:25.588599  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (6.034426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.588802  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (1.174348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.588810  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.589044  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:25.589317  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (1.21389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.589721  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:25.590755  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-20.158d9a2cd946230b: (4.112692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41912]
I0320 07:29:25.592150  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:25.592181  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:25.593252  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (4.211553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.594175  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-44.158d9a2ce0bf6ddf: (2.876915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.597480  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:25.600834  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:25.601632  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (7.801398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.601864  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-45.158d9a2ce0ffc937: (6.933591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.608759  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:25.608800  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:25.609033  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-46.158d9a2ce14018b4: (6.596308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.610601  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (8.692546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.620683  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:25.620722  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:25.626954  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-47.158d9a2ce181f2fa: (16.972899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.631852  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (18.147109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.632421  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-48.158d9a2ce1c1d2ee: (2.220566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.634740  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:25.634773  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:25.635312  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-49.158d9a2ce201115f: (2.16719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.636439  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (4.221929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.637960  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-27.158d9a2cdbbf62c6: (1.995768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.639294  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:25.639323  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:25.640458  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-41.158d9a2cdfb141c8: (1.925161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.641185  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (4.361808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.642781  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.907445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.644320  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:25.644344  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:25.644636  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.356501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.645161  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (3.612059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.646771  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.266673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.648580  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.407243ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.649381  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (3.667596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.649414  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:25.649438  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:25.650786  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.172761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.651922  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:25.652012  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:25.653777  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.630421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.653778  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (4.10412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41862]
I0320 07:29:25.655468  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.288261ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.656868  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:25.656906  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:25.657568  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.315829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.658311  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (4.145653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.659194  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.178289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.660873  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.177793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.660999  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:25.661056  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:25.662035  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (3.390917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.663478  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.756784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.665073  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:25.665130  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:25.666433  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (3.83149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.666626  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.197311ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.669198  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:25.669224  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:25.670250  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (3.258414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.670630  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.212491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.678677  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:25.678711  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:25.679713  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (9.227462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.680382  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.463593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.684471  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:25.684846  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:25.685196  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (5.114767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.686778  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.247002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.689740  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:25.689769  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:25.690574  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (4.179258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.691364  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.380352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.693991  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:25.694028  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:25.696160  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (5.224621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.696680  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.321231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.699923  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:25.699963  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:25.700942  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (4.111375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.701380  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.066677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.704112  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:25.704141  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:25.705624  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.239905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.706711  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (5.460724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.709626  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:25.709671  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:25.710818  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (3.780916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.711384  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.504407ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.713408  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:25.713490  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:25.715273  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (4.101912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.715866  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.08038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.717961  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:25.717991  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:25.719219  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (3.613088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.719631  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.380386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.721913  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:25.721975  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:25.725307  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.073354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.725901  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (6.334045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.730011  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:25.730041  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:25.730782  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (3.874227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.731512  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.257145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.733502  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:25.733559  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:25.734731  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (3.672953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.734910  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.128127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.737028  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:25.737063  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:25.738251  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (3.299553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.738332  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.057361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.741204  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:25.741238  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:25.742453  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (3.896754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.745203  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.744313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.745246  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:25.745278  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:25.746041  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (3.299095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.747016  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.479708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.748672  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:25.748708  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:25.749672  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (3.345305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.750049  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.15987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.752169  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:25.752203  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:25.753486  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (3.540358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.754116  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.519622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.756228  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:25.756269  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:25.757757  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.161941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.757757  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (3.715447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.760063  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:25.760110  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:25.761113  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (3.069612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.761578  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.160606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.764179  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:25.764233  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:25.765577  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (4.146294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.766054  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.536358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.768898  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:25.768928  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:25.770152  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (3.710893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.770540  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.340998ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.773053  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:25.773108  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:25.774111  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (3.289766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.774567  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.232434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.776588  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:25.776642  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:25.777783  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (3.426717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.778196  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.301254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.780292  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:25.780324  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:25.781323  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (3.25021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.781638  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.094047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.784332  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:25.784362  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:25.785249  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (3.375816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.785982  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.371564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.788769  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:25.788800  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:25.790128  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (4.61281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.790481  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.489736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.792882  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:25.792917  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:25.794448  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (4.094726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.794907  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.214902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.797180  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:25.797219  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:25.798708  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (3.914149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.798924  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.411072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.801287  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:25.801322  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:25.802784  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (3.762958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.803479  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.936249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.805748  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:25.805787  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:25.806695  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (3.630736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.807504  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.42504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.809586  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:25.809626  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:25.810451  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (3.486925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.811447  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.483293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.813281  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:25.813311  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:25.814431  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (3.621763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.814885  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.308991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.818904  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-0: (3.999839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.820040  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1: (823.773µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.825065  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (4.620613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.827497  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (971.482µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.829772  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (788.972µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.832121  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (846.843µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.834492  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (769.801µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.836846  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (913.451µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.839404  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (935.606µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.842023  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (1.152341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.845067  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (1.391388ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.851574  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (1.096219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.855164  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (888.218µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.857570  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (881.963µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.859841  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (786.45µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.862349  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (745.949µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.865380  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (920.719µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.867772  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (832.523µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.870056  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (766.614µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.872478  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (893.523µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.876674  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (764.522µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.879335  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (797.019µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.881883  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (810.327µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.884069  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (731.288µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.887200  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (943.893µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.889593  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (921.951µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.892062  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (914.992µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.894469  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (878.577µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.896933  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (977.917µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.899192  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (769.919µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.901534  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (803.174µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.904017  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (996.617µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.906354  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (814.773µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.909262  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (877.782µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.911572  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (814.376µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.913972  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (838.971µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.916353  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (830.18µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.918795  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (903.689µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.921364  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (1.0147ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.924618  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.005006ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.926858  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (790.451µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.929306  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (925.213µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.931619  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (795.251µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.933955  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (816.759µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.936372  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (872.675µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.938729  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (838.443µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.941000  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (789.572µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.944119  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (1.086374ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.946440  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (821.432µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.948811  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (930.928µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.956345  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (2.193054ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.958791  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (1.044921ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.960988  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (663.784µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.963440  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-0: (849.361µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.965866  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1: (847.82µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.968314  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (941.714µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.970587  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.833703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.971007  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0
I0320 07:29:25.971033  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0
I0320 07:29:25.971291  105913 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0", node "node1"
I0320 07:29:25.971316  105913 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0320 07:29:25.971371  105913 factory.go:733] Attempting to bind rpod-0 to node1
I0320 07:29:25.972639  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.738187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.972902  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1
I0320 07:29:25.972926  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1
I0320 07:29:25.973010  105913 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1", node "node1"
I0320 07:29:25.973031  105913 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0320 07:29:25.973067  105913 factory.go:733] Attempting to bind rpod-1 to node1
I0320 07:29:25.973384  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-0/binding: (1.710168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.973590  105913 scheduler.go:572] pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 07:29:25.975449  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1/binding: (2.12799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:25.975640  105913 scheduler.go:572] pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 07:29:25.975778  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.917088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:25.977632  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.373953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.075129  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-0: (1.776263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.177849  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1: (1.943392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.178151  105913 preemption_test.go:561] Creating the preemptor pod...
I0320 07:29:26.180547  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.150937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.180740  105913 preemption_test.go:567] Creating additional pods...
I0320 07:29:26.180874  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:26.180895  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:26.181013  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.181065  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.183642  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.924627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0320 07:29:26.183780  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.802306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.184143  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod/status: (2.416781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41922]
I0320 07:29:26.184211  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.853422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41902]
I0320 07:29:26.186289  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.158399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.186659  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.187423  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.107983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0320 07:29:26.188559  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod/status: (1.51481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.190776  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.935032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0320 07:29:26.193636  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.450452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0320 07:29:26.196558  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1: (7.636689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.196888  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:26.196902  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:26.197012  105913 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod", node "node1"
I0320 07:29:26.197024  105913 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0320 07:29:26.197063  105913 factory.go:733] Attempting to bind preemptor-pod to node1
I0320 07:29:26.197119  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:26.197136  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:26.197262  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.197311  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.199046  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (4.777805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0320 07:29:26.200235  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.262773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.202018  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod/binding: (4.233451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41924]
I0320 07:29:26.202463  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (4.43989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41928]
I0320 07:29:26.202639  105913 scheduler.go:572] pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 07:29:26.202994  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.929314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41920]
I0320 07:29:26.204437  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.741348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.204934  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0/status: (6.929165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41926]
I0320 07:29:26.206337  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.415735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41928]
I0320 07:29:26.206845  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.622788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.207792  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (2.324811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41926]
I0320 07:29:26.208011  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.208195  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:26.208214  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:26.208332  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.208366  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.209061  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.637186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.210544  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.503212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41930]
I0320 07:29:26.210752  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1/status: (1.586431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41926]
I0320 07:29:26.210834  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (1.785555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41924]
I0320 07:29:26.211612  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.977483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.212346  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (1.227822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41926]
I0320 07:29:26.212895  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.213593  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:26.213616  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:26.213709  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.213750  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.214140  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.106082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.214978  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (953.568µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41930]
I0320 07:29:26.216045  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.412942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41932]
I0320 07:29:26.216256  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2/status: (2.236883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41926]
I0320 07:29:26.220794  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (4.183548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41932]
I0320 07:29:26.221136  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.221424  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:26.221457  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:26.221610  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.221665  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.222005  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (7.430083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.226230  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.746661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41930]
I0320 07:29:26.226847  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3/status: (3.541779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41932]
I0320 07:29:26.228460  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (1.173857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41932]
I0320 07:29:26.228974  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.229471  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (1.915157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41930]
I0320 07:29:26.229854  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:26.229873  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:26.230032  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.230099  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.232489  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.338237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41932]
I0320 07:29:26.232801  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (1.994822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41934]
I0320 07:29:26.233510  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4/status: (2.702282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.234953  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (1.155382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.235214  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.235325  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:26.235376  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:26.235492  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.235720  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.236102  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (4.422174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0320 07:29:26.236812  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (3.358372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41932]
I0320 07:29:26.248059  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (11.983774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41934]
I0320 07:29:26.248443  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5/status: (12.304352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.248888  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.620321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41932]
I0320 07:29:26.249745  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (13.14142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0320 07:29:26.250257  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (1.299397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41908]
I0320 07:29:26.250601  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.250759  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:26.250776  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:26.250862  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.250901  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.251323  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.788577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41932]
I0320 07:29:26.252889  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (1.647177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41934]
I0320 07:29:26.252987  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (1.725569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0320 07:29:26.253626  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.253669  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.893818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41932]
I0320 07:29:26.253899  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:26.253918  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:26.253987  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.254030  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.254722  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-0.158d9a2d60954707: (3.174175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.256668  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.820014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0320 07:29:26.257647  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (2.304229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.257682  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.064023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41940]
I0320 07:29:26.258853  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6/status: (4.33861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41934]
I0320 07:29:26.261489  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (4.012028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0320 07:29:26.263785  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (4.237395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41940]
I0320 07:29:26.264145  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.264362  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:26.264379  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:26.264465  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.264505  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.268688  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (6.800709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0320 07:29:26.269958  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7/status: (5.233072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41940]
I0320 07:29:26.269982  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (4.824797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.271508  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (1.149486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41940]
I0320 07:29:26.271730  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.271940  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:26.271962  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:26.272035  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.272073  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.272157  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.556867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0320 07:29:26.273908  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (1.58349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.273990  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (1.664323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41940]
I0320 07:29:26.274265  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.274447  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:26.274474  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:26.274580  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.274629  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.275539  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.88425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0320 07:29:26.276852  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (1.950245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.277413  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8/status: (2.49112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41940]
I0320 07:29:26.278261  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.40212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0320 07:29:26.278865  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (13.703197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41942]
I0320 07:29:26.279505  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (1.303256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41940]
I0320 07:29:26.280072  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.280211  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:26.280228  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:26.280288  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.280339  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.281032  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.025149ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0320 07:29:26.282128  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (1.216946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.282155  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9/status: (1.578261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41940]
I0320 07:29:26.282627  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-1.158d9a2d613e20c5: (3.063888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41942]
I0320 07:29:26.283926  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (1.365136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.284660  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.631459ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41942]
I0320 07:29:26.285439  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.285551  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:26.285562  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:26.285618  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.285659  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.287513  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.215403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41942]
I0320 07:29:26.289306  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (5.852785ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0320 07:29:26.289693  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (3.788251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41940]
I0320 07:29:26.290070  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10/status: (4.1723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.291419  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.521299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41942]
I0320 07:29:26.292904  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.524678ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41936]
I0320 07:29:26.294470  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (3.083112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.295279  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.295442  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:26.295462  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:26.295552  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.295609  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.295567  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.2526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41942]
I0320 07:29:26.299261  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.864012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41942]
I0320 07:29:26.299428  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (2.957167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41940]
I0320 07:29:26.299628  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (3.600654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.300160  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.300449  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:26.300469  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:26.300500  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-3.158d9a2d62090e0b: (3.376778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41944]
I0320 07:29:26.300576  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.300632  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.301669  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.901041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41942]
I0320 07:29:26.303347  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.222289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41946]
I0320 07:29:26.303533  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (2.073433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.303827  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11/status: (3.031183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41944]
I0320 07:29:26.304575  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.789801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41942]
I0320 07:29:26.306629  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (2.402954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.306959  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.307164  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:26.307195  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:26.307270  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.307316  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.309860  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (1.860857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41940]
I0320 07:29:26.310278  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.002179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41948]
I0320 07:29:26.311420  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (6.272297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41946]
I0320 07:29:26.312437  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12/status: (4.421115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.313947  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.142694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41948]
I0320 07:29:26.315493  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (2.446568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.316220  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.316512  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.241325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41948]
I0320 07:29:26.316515  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:26.316582  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:26.316638  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.316673  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.318753  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (1.484147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.318840  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (1.676794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41940]
I0320 07:29:26.318936  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.319432  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.534857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41952]
I0320 07:29:26.319638  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-5.158d9a2d62dcb346: (1.938431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41948]
I0320 07:29:26.319825  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:26.319845  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:26.319925  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.319996  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.322184  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13/status: (1.894443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41948]
I0320 07:29:26.325469  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (4.854658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41954]
I0320 07:29:26.326145  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (6.328304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.326294  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (3.752789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41948]
I0320 07:29:26.326558  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.326616  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (6.268033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41950]
I0320 07:29:26.326667  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:26.326677  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:26.326738  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.326772  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.328142  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (1.177108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41954]
I0320 07:29:26.328579  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14/status: (1.545478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41950]
I0320 07:29:26.328636  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.390487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41956]
I0320 07:29:26.329059  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.518515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.330618  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (1.477542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41956]
I0320 07:29:26.330814  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.331280  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.788253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41938]
I0320 07:29:26.331304  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:26.331326  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:26.331411  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.331459  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.333548  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15/status: (1.822506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41954]
I0320 07:29:26.334004  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.369264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41956]
I0320 07:29:26.334360  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (2.214103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41958]
I0320 07:29:26.336008  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.419937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41956]
I0320 07:29:26.336305  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.237249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41960]
I0320 07:29:26.336386  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (2.397449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41954]
I0320 07:29:26.336744  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.336850  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:26.336870  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:26.336929  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.336968  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.339643  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (2.250944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41958]
I0320 07:29:26.340232  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16/status: (2.814572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41960]
I0320 07:29:26.340679  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (3.881507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41956]
I0320 07:29:26.341303  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.965173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41962]
I0320 07:29:26.344599  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (3.767773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41960]
I0320 07:29:26.344873  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (3.790488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41956]
I0320 07:29:26.344919  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.345042  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:26.345067  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:26.345204  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.345256  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.347587  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.549686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41966]
I0320 07:29:26.348842  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (2.901364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41964]
I0320 07:29:26.349302  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17/status: (3.40188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41958]
I0320 07:29:26.349766  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (4.457737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41962]
I0320 07:29:26.352148  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (1.135274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41958]
I0320 07:29:26.352368  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.381912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41962]
I0320 07:29:26.352383  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.352820  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:26.352836  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:26.352941  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.352977  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.355542  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.714497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41958]
I0320 07:29:26.356262  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (2.834269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41968]
I0320 07:29:26.356837  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18/status: (3.455049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41966]
I0320 07:29:26.356904  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.248436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41970]
I0320 07:29:26.358622  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (1.42679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41966]
I0320 07:29:26.358634  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.677519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41958]
I0320 07:29:26.360130  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.360265  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:26.360296  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:26.360385  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.360481  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.360843  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.856877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41958]
I0320 07:29:26.361797  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (1.148554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41966]
I0320 07:29:26.362639  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.399465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41972]
I0320 07:29:26.363333  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19/status: (2.45218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41968]
I0320 07:29:26.363345  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.088281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41958]
I0320 07:29:26.365214  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.408871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41966]
I0320 07:29:26.365575  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (1.874381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41972]
I0320 07:29:26.365787  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.366022  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:26.366035  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:26.366146  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.366178  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.368021  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.382281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41966]
I0320 07:29:26.368416  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.873134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41974]
I0320 07:29:26.368777  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20/status: (2.349232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41972]
I0320 07:29:26.370314  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.635086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41976]
I0320 07:29:26.370742  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.391897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41972]
I0320 07:29:26.371073  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.657925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41966]
I0320 07:29:26.371287  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.371451  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:26.371466  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:26.371530  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.371573  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.373266  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.712168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41976]
I0320 07:29:26.373629  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.55688ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41980]
I0320 07:29:26.373858  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (1.72198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41978]
I0320 07:29:26.374441  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21/status: (2.355017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41974]
I0320 07:29:26.375829  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (977.201µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41980]
I0320 07:29:26.376054  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.376183  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:26.376195  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:26.376250  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.376283  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.378045  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (1.285056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41976]
I0320 07:29:26.378064  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.238945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41982]
I0320 07:29:26.378254  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22/status: (1.488399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41980]
I0320 07:29:26.379568  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (967.135µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41982]
I0320 07:29:26.379824  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.379980  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:26.380017  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:26.380122  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.380164  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.381362  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (973.396µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41982]
I0320 07:29:26.382225  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23/status: (1.839899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41976]
I0320 07:29:26.383146  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.346321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41984]
I0320 07:29:26.384027  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (1.246831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41976]
I0320 07:29:26.384275  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.384420  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:26.384437  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:26.384537  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.384604  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.386385  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.342329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41986]
I0320 07:29:26.386474  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.650632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41982]
I0320 07:29:26.387159  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24/status: (2.327544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41984]
I0320 07:29:26.388555  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.023123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41982]
I0320 07:29:26.388807  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.388933  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:26.388947  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:26.389003  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.389044  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.390800  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.258973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41988]
I0320 07:29:26.391678  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25/status: (2.386776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41986]
I0320 07:29:26.392618  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (2.655795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41982]
I0320 07:29:26.393269  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (1.210869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41986]
I0320 07:29:26.393495  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.393609  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:26.393621  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:26.393697  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.393745  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.395054  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (1.082926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41988]
I0320 07:29:26.396814  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.557009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41990]
I0320 07:29:26.397223  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26/status: (3.286801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41982]
I0320 07:29:26.398936  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (1.313408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41990]
I0320 07:29:26.399188  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.399299  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:26.399317  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:26.399421  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.399457  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.400850  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (987.073µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41988]
I0320 07:29:26.401579  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27/status: (1.704081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41990]
I0320 07:29:26.401871  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.364297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41992]
I0320 07:29:26.403355  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.315761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41990]
I0320 07:29:26.403565  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.403864  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:26.403878  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:26.403941  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.403983  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.405776  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (1.524045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41990]
I0320 07:29:26.405999  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.406336  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:26.406353  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:26.406459  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.406502  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.407145  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-11.158d9a2d66be0484: (2.466739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41994]
I0320 07:29:26.408991  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.463259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41994]
I0320 07:29:26.409307  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (2.313518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41996]
I0320 07:29:26.409732  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28/status: (2.898405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41990]
I0320 07:29:26.410326  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (6.049652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41988]
I0320 07:29:26.411246  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (1.005248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41996]
I0320 07:29:26.411568  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.411659  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:26.411672  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:26.411746  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.411798  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.413109  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (1.090054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41988]
I0320 07:29:26.414326  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29/status: (1.876429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41994]
I0320 07:29:26.415406  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.084098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.416203  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (1.300057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41994]
I0320 07:29:26.416452  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.416576  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:26.416589  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:26.416656  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.416695  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.417827  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (951.871µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.418801  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.260401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0320 07:29:26.419421  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30/status: (2.51768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41988]
I0320 07:29:26.420987  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (1.011854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0320 07:29:26.421247  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.421376  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:26.421417  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:26.421488  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.421526  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.422743  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (1.002317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.423495  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.518768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42002]
I0320 07:29:26.424566  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31/status: (2.823865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I0320 07:29:26.426163  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (1.013335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42002]
I0320 07:29:26.426383  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.426568  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:26.426583  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:26.426656  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.426706  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.428051  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (1.120566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42002]
I0320 07:29:26.428522  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32/status: (1.603416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.428775  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.531042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42004]
I0320 07:29:26.430143  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (1.127117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.430429  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.430557  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:26.430571  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:26.430658  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.430703  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.431966  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (1.109628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.432621  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.432769  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:26.432792  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:26.432889  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.432926  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.433134  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (2.2103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42002]
I0320 07:29:26.433769  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-13.158d9a2d67e4e9cb: (2.344883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42006]
I0320 07:29:26.434840  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (1.354128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42002]
I0320 07:29:26.435356  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33/status: (2.024403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.436850  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (1.084996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.437210  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.437369  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:26.437385  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:26.437480  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.437520  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.440574  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34/status: (2.716645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.442802  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (1.276352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.443243  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (5.403338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42002]
I0320 07:29:26.443709  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.443899  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:26.443923  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:26.444068  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.444172  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.446517  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35/status: (1.984572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.446787  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (12.15667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42006]
I0320 07:29:26.446817  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (1.261366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42002]
I0320 07:29:26.448560  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.340028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42002]
I0320 07:29:26.448833  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (1.862999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.449308  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.449542  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:26.449575  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:26.449727  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.449788  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.451224  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.079331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42006]
I0320 07:29:26.451886  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.385382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0320 07:29:26.452193  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36/status: (2.124964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I0320 07:29:26.453867  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.308999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0320 07:29:26.454583  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.880181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42006]
I0320 07:29:26.455350  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.455548  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:26.455605  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:26.455787  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.455841  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.458066  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (1.55028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0320 07:29:26.458340  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.593293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42010]
I0320 07:29:26.458535  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37/status: (1.983066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42006]
I0320 07:29:26.460136  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (1.151452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42010]
I0320 07:29:26.460491  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.460668  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:26.460686  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:26.460783  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.460839  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.463018  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (1.912094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0320 07:29:26.463489  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.032319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42012]
I0320 07:29:26.465501  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38/status: (4.384219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42010]
I0320 07:29:26.467349  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (1.426769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42012]
I0320 07:29:26.467629  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.467819  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:26.467849  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:26.467948  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.468000  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.469554  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (1.149552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0320 07:29:26.470234  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.58309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0320 07:29:26.471136  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39/status: (1.922168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42012]
I0320 07:29:26.486226  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (10.862997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0320 07:29:26.492047  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (10.589468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0320 07:29:26.494680  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.495621  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:26.495642  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:26.495947  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.496008  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.513488  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (17.156929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0320 07:29:26.513954  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.520020  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (22.692763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0320 07:29:26.520635  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:26.520659  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:26.521026  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.521112  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.522504  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:26.523832  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-17.158d9a2d6966e522: (9.323639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42016]
I0320 07:29:26.525921  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (2.383185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0320 07:29:26.528005  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40/status: (4.020349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0320 07:29:26.528606  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.96966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42016]
I0320 07:29:26.533218  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:26.536690  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (5.474501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0320 07:29:26.537378  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.537543  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:26.537844  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:26.537859  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:26.537994  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.538050  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.539057  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:26.539477  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:26.540022  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (1.474076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0320 07:29:26.540584  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.718726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42018]
I0320 07:29:26.554178  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41/status: (15.581794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42014]
I0320 07:29:26.556028  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (1.275953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42018]
I0320 07:29:26.556405  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.565195  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:26.565235  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:26.565377  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.565814  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.569261  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42/status: (2.714222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42018]
I0320 07:29:26.569893  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.081926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42020]
I0320 07:29:26.570194  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (2.427135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0320 07:29:26.571338  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (1.430661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42018]
I0320 07:29:26.571723  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.571972  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:26.571990  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:26.572165  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.572221  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.574198  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (1.435344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42020]
I0320 07:29:26.574666  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.535113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42022]
I0320 07:29:26.575878  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43/status: (3.226889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42008]
I0320 07:29:26.601364  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (24.998428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42022]
I0320 07:29:26.601615  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.601723  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:26.601734  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:26.601794  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.601828  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.611005  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (8.287842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42020]
I0320 07:29:26.611726  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44/status: (9.69759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42022]
I0320 07:29:26.612364  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (9.87305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42024]
I0320 07:29:26.614064  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (1.33127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42022]
I0320 07:29:26.614430  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.614591  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:26.614617  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:26.614719  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.614779  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.619238  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.014987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42020]
I0320 07:29:26.621305  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (5.057262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42024]
I0320 07:29:26.622694  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45/status: (4.952347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42026]
I0320 07:29:26.624587  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (1.115963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42024]
I0320 07:29:26.624944  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.625066  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:26.625096  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:26.625164  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.625205  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.627778  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (2.036578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42020]
I0320 07:29:26.628027  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.277415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42028]
I0320 07:29:26.628570  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46/status: (3.146542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42024]
I0320 07:29:26.629949  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (945.821µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42028]
I0320 07:29:26.630193  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.630324  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:26.630354  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:26.630446  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.630498  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.636786  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (6.455488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42028]
I0320 07:29:26.637158  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (6.1626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42030]
I0320 07:29:26.637517  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (6.566589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0320 07:29:26.637566  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47/status: (6.655236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42020]
I0320 07:29:26.637924  105913 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
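The preemption_test.go line above marks the point where the test stops creating pods and starts verifying that the low-priority ppod-* pods were never bound. As a minimal sketch only (not the actual test code; the helper name is hypothetical), the check it implies amounts to fetching each pod and confirming it has no node assignment and carries a PodScheduled=False condition with reason "Unschedulable", matching the factory.go condition updates logged throughout this run:

```go
// Hedged illustration of the "never scheduled" check the log line describes.
// neverScheduled is a hypothetical helper, not the function used by the test.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// neverScheduled reports whether a freshly fetched pod is still unbound and
// was marked unschedulable by the scheduler.
func neverScheduled(pod *v1.Pod) bool {
	if pod.Spec.NodeName != "" {
		return false // the pod was bound to a node at some point
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodScheduled &&
			cond.Status == v1.ConditionFalse &&
			cond.Reason == v1.PodReasonUnschedulable {
			return true // scheduler recorded PodScheduled=False, Reason=Unschedulable
		}
	}
	return false
}

func main() {
	// Example pod shaped like the ppod-* pods in this log: unbound, with the
	// Unschedulable condition set by the scheduler's status update.
	pod := &v1.Pod{
		Status: v1.PodStatus{
			Conditions: []v1.PodCondition{
				{Type: v1.PodScheduled, Status: v1.ConditionFalse, Reason: v1.PodReasonUnschedulable},
			},
		},
	}
	fmt.Println(neverScheduled(pod)) // prints: true
}
```

The GET requests for ppod-0, ppod-1, ppod-2, ... that follow are the test performing this per-pod verification against the apiserver.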
I0320 07:29:26.639121  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (975.34µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42028]
I0320 07:29:26.639737  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (1.736718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0320 07:29:26.640403  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.640535  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:26.640553  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:26.640708  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.640753  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.642763  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.265599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42034]
I0320 07:29:26.652561  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (11.304361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0320 07:29:26.653786  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (14.277752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42028]
I0320 07:29:26.655998  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48/status: (14.765966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42020]
I0320 07:29:26.656720  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (2.453991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0320 07:29:26.658274  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (1.257344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42020]
I0320 07:29:26.658545  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.658694  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:26.658705  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:26.658798  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.658839  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.659698  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (2.312084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0320 07:29:26.666643  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (6.284544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0320 07:29:26.667444  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49/status: (7.386264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42020]
I0320 07:29:26.667807  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (8.014918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42034]
I0320 07:29:26.668210  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (8.564236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42036]
I0320 07:29:26.671528  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (1.473195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0320 07:29:26.672599  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (2.245699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42034]
I0320 07:29:26.672818  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.672952  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:26.672963  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:26.673027  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.673061  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.675956  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (2.646091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42036]
I0320 07:29:26.676348  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (4.501698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0320 07:29:26.676746  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (3.184486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42034]
I0320 07:29:26.679449  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-21.158d9a2d6af858a9: (5.63135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42038]
I0320 07:29:26.681808  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (4.085967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42032]
I0320 07:29:26.685423  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.685549  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:26.685562  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:26.685630  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.685667  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.688885  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (2.432641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42034]
I0320 07:29:26.689051  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (2.633516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42036]
I0320 07:29:26.689110  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.689437  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (4.488688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42038]
I0320 07:29:26.689518  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:26.689530  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:26.689599  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.689629  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.691104  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (1.268416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42034]
I0320 07:29:26.691363  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (1.094474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42042]
I0320 07:29:26.691453  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (1.537055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42036]
I0320 07:29:26.691903  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.692028  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:26.692054  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:26.692150  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.692191  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.693528  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-25.158d9a2d6c030d7b: (6.814496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.693881  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (1.566313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42042]
I0320 07:29:26.694303  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (2.076447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42034]
I0320 07:29:26.694512  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.694713  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (1.392085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0320 07:29:26.695033  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:26.695055  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:26.695141  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.695182  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.697328  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (1.620716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.698470  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (3.078689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0320 07:29:26.698662  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (3.796886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42042]
I0320 07:29:26.698747  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.698861  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:26.698884  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:26.698958  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:26.699006  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:26.699793  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-40.158d9a2d73e20fa0: (2.125312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.700311  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (1.141343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.700893  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:26.704455  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (3.899858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0320 07:29:26.704825  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (5.259705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42048]
I0320 07:29:26.705512  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-42.158d9a2d768bb436: (5.158963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.706479  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (1.338521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0320 07:29:26.708231  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (1.440633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0320 07:29:26.709267  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-46.158d9a2d7a1692f1: (2.725723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.710054  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (1.049456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0320 07:29:26.729784  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-48.158d9a2d7b03ccc1: (19.146647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.729808  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (18.338773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42044]
I0320 07:29:26.731387  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (1.139269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.733016  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (1.110201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.734408  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (1.020971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.735985  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.267129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.737507  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (1.06513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.738876  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (1.036222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.740297  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (1.036542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.742385  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.040848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.745634  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (1.094367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.747120  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (1.084749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.748557  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.072396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.749893  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (1.021807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.751273  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (1.026031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.752646  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (1.002766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.753972  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (997.938µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.756569  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (2.118261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.762794  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (5.814621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.764875  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (1.00125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.773110  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (1.175912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.774930  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.127175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.776635  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (1.196341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.778511  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (1.430092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.780660  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (930.619µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.781933  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (962.615µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.785627  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (3.001105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.787013  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (1.031931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.788636  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (1.266578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.790013  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (999.711µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.791499  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (1.146231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.792925  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (994.105µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.794335  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (1.029201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.796733  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (1.493463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.798233  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (1.115334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.798451  105913 preemption_test.go:598] Cleaning up all pods...
I0320 07:29:26.801310  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:26.801347  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:26.802744  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (4.135603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.806850  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:26.806884  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:26.809895  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (5.816724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.810241  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (5.692203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.812756  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.873988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.813290  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:26.813319  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:26.814450  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (3.806521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.814921  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.314881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.817721  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:26.817799  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:26.818463  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (3.724827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.819745  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.486736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.822362  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (3.673344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.825729  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:26.825765  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:26.827039  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:26.827097  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:26.827672  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.447944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.828743  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (6.07974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.831520  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.66885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.832046  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:26.832097  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:26.832643  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (3.654292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.833802  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.400267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.835600  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:26.835631  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:26.836903  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (3.995054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.837473  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.638061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.839972  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:26.839995  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:26.844608  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (4.39193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.844900  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (7.291346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.849499  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:26.849530  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:26.850381  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (3.134879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.850899  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.116354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.853270  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:26.853310  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:26.854623  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.088546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.854890  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (4.264787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.857549  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:26.857573  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:26.859042  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (3.785589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.859439  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.647941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.862122  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:26.862152  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:26.864039  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.520171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.865628  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (6.024933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.868657  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:26.868690  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:26.869898  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (3.659571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.870250  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.28282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.872511  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:26.872539  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:26.873958  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.208573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.875130  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (4.964152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.878028  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:26.878068  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:26.879497  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (4.11475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.879564  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.233039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.884948  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (5.174176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.886266  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:26.886300  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:26.887665  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:26.887698  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:26.888108  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.464545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.889775  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (4.5142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.890057  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.334204ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.892927  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:26.892961  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:26.893970  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (3.907534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.897795  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (4.522335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.899033  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:26.899215  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:26.900137  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (4.290676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.901130  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.393936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.906405  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:26.906438  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:26.908022  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (7.54839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.912136  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:26.912170  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:26.912524  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (5.75259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.913960  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (5.575331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.915065  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.988614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.917794  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:26.917823  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:26.919228  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.217489ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.920210  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (4.909005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.923261  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:26.923287  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:26.924891  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.417761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.926614  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (5.989538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.929211  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:26.929241  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:26.930886  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.409917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.931191  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (4.296283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.933756  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:26.933787  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:26.935403  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (3.928933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.935660  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.690062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.938201  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:26.938231  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:26.939551  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.131293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.939641  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (3.784759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.942378  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:26.942419  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:26.944222  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (4.253425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.945130  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.46837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.947456  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:26.947486  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:26.949575  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (4.604612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.950216  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.419336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.952978  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:26.953004  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:26.954358  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (4.358868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.957218  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.968792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.957413  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:26.957448  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:26.958794  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (4.091828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.958838  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.033971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.961750  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:26.961787  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:26.964058  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (4.895074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.964627  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.581297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.967152  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:26.967194  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:26.968658  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.23782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.968829  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (4.323327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.971986  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:26.972023  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:26.973685  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.400761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.974230  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (5.100217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.979167  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:26.979202  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:26.980073  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (5.135142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.981406  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.651808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.982703  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:26.982732  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:26.984850  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (4.520163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.985674  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.744424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.987488  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:26.987518  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:26.989273  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.549206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.989732  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (4.569447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.992544  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:26.992601  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:26.994455  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (4.445918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:26.995110  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.248201ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:26.999022  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:26.999104  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:27.000487  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.158124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.000811  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (5.604156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.005553  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:27.005582  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:27.008993  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.297461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.009180  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (8.093454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.012363  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:27.012404  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:27.017492  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (8.03117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.018303  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (5.662417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.021029  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:27.021060  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:27.023497  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.181071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.023910  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (5.616264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.027500  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:27.027533  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:27.028732  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (4.467556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.029224  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.464562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.031924  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:27.031968  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:27.033314  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (4.258374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.033683  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.51482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.037862  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:27.037912  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:27.039463  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.31158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.040042  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (5.663768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.052231  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:27.052266  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:27.053530  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (3.977113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.053706  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.174953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.056893  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:27.056927  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:27.057468  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (3.681324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.059924  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.957158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.060333  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:27.060357  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:27.062143  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.406424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.062500  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (4.761435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.067133  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:27.067164  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:27.068561  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (5.773647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.069139  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.355211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.071228  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:27.071256  105913 scheduler.go:449] Skip schedule deleting pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:27.072454  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (3.619993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.072893  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.412356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.076858  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-0: (4.140921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.078047  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1: (925.924µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.082226  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (3.796664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.090179  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (919.937µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.093757  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (1.218428ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.097445  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (2.237982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.100296  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (1.302316ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.104000  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (2.218597ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.107196  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (1.227766ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.110642  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (1.986625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.112994  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (857.08µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.115255  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (804.49µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.117678  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (919.838µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.120014  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (809.681µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.123374  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (1.486477ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.125772  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (919.403µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.128223  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (901.489µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.130705  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (950.525µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.133318  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (975.997µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.136791  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (899.386µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.140570  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (871.518µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.143021  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (956.504µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.146648  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (1.715439ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.149257  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.042126ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.153973  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (2.364702ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.158255  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (922.21µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.160853  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (1.109855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.163317  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (994.745µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.167279  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (2.449117ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.169498  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (821.787µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.171942  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (955.249µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.175350  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (1.157385ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.177898  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (955.118µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.180277  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (842.195µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.182783  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (955.639µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.187387  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (3.07813ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.189936  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (914.701µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.192566  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (956.272µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.200614  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (6.313099ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.204505  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (2.364078ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.206820  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (892.383µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.209475  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (818.906µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.211955  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (950.374µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.215025  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (1.384745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.217332  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (829.216µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.219654  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (835.593µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.226264  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (5.041025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.228727  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (982.608µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.231108  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (813.653µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.233640  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (995.004µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.236708  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (1.372764ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.239148  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (811.595µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.241654  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (990.784µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.244723  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-0: (827.724µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.247014  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1: (812.55µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.249276  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (812.722µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.251514  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.827012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.251853  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0
I0320 07:29:27.252329  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0
I0320 07:29:27.253874  105913 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0", node "node1"
I0320 07:29:27.253985  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.029196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.254245  105913 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0320 07:29:27.254335  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1
I0320 07:29:27.254400  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1
I0320 07:29:27.254345  105913 factory.go:733] Attempting to bind rpod-0 to node1
I0320 07:29:27.254500  105913 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1", node "node1"
I0320 07:29:27.254517  105913 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0320 07:29:27.254667  105913 factory.go:733] Attempting to bind rpod-1 to node1
I0320 07:29:27.256209  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-0/binding: (1.593108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.256615  105913 scheduler.go:572] pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 07:29:27.257665  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1/binding: (2.804584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.257892  105913 scheduler.go:572] pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 07:29:27.258314  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.460997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.259975  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.263892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.358411  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-0: (3.324652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.460953  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1: (1.640744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.461452  105913 preemption_test.go:561] Creating the preemptor pod...
I0320 07:29:27.463772  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.052175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.463980  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:27.463996  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:27.464013  105913 preemption_test.go:567] Creating additional pods...
I0320 07:29:27.464123  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.464165  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.466780  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.259669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.467343  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.455611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42070]
I0320 07:29:27.467369  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod/status: (2.546628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.467419  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.834372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42046]
I0320 07:29:27.475927  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (8.120792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.476209  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (8.527717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.476410  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.478721  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod/status: (1.981694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42040]
I0320 07:29:27.479141  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.830731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.481755  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.621687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.484736  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.036875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.486211  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/rpod-1: (5.951919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42070]
I0320 07:29:27.487130  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.745818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.489008  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.508017ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.489363  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.223144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42070]
I0320 07:29:27.498256  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (8.819375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.500549  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:27.500571  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0
I0320 07:29:27.500691  105913 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0", node "node1"
I0320 07:29:27.500703  105913 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0", node "node1": all PVCs bound and nothing to do
I0320 07:29:27.500748  105913 factory.go:733] Attempting to bind ppod-0 to node1
I0320 07:29:27.501633  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:27.501645  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:27.501727  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.501758  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.515348  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (16.617943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.516797  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0/binding: (2.816961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42070]
I0320 07:29:27.517169  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (2.335285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42074]
I0320 07:29:27.517550  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1/status: (3.493193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42072]
I0320 07:29:27.517664  105913 scheduler.go:572] pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 07:29:27.519213  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.235436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.519838  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (1.207364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42072]
I0320 07:29:27.519891  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.10819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42070]
I0320 07:29:27.520159  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.520329  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:27.520343  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:27.520444  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.520481  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.521252  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.444055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.521637  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (983.224µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42072]
I0320 07:29:27.522440  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2/status: (1.73111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.522835  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.223605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.522939  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:27.525034  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.559656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I0320 07:29:27.525336  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (2.579564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.525568  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.525728  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:27.525743  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:27.525825  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.525857  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.527027  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.566671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0320 07:29:27.527696  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (1.215625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42072]
I0320 07:29:27.527846  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.15975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42080]
I0320 07:29:27.528996  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.543421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42076]
I0320 07:29:27.530852  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3/status: (2.695813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.532377  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (1.074183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.532535  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.879429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42080]
I0320 07:29:27.533063  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.533245  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:27.533284  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:27.533386  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.533453  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.535014  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:27.536258  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (1.787595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42082]
I0320 07:29:27.536692  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (3.677228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.536796  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4/status: (3.110439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42072]
I0320 07:29:27.537757  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.710209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42084]
I0320 07:29:27.537885  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:27.539345  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.659198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42082]
I0320 07:29:27.540123  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (1.901541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42084]
I0320 07:29:27.540162  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:27.540204  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:27.540530  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.540725  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:27.540777  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:27.540920  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.540975  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.543048  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (3.332754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42082]
I0320 07:29:27.543583  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.417412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42086]
I0320 07:29:27.544543  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (2.864191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.545052  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5/status: (3.143931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42084]
I0320 07:29:27.546282  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.270582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42082]
I0320 07:29:27.548038  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (2.311159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.548718  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.548867  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:27.548886  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:27.548955  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.548997  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.551281  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (4.672556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42082]
I0320 07:29:27.551406  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6/status: (1.859322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.552605  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (943.597µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42082]
I0320 07:29:27.552801  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.553307  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.684317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.553684  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (4.133084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42086]
I0320 07:29:27.554149  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:27.554168  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:27.554254  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.554295  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.556726  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (3.07487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.557312  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7/status: (1.815506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42086]
I0320 07:29:27.557573  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (2.360539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42082]
I0320 07:29:27.558751  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.673156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.559590  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (1.420351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42086]
I0320 07:29:27.559841  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.560032  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:27.560091  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:27.560176  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.560214  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.560620  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.392638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.562068  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (11.23416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0320 07:29:27.562417  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (1.897865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42082]
I0320 07:29:27.562447  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.480154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.562770  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8/status: (2.158187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42086]
I0320 07:29:27.565884  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (2.142803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.566410  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.189362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42082]
I0320 07:29:27.566873  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.567966  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:27.567986  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:27.568108  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.568144  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.568771  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.569147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0320 07:29:27.569573  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.576917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.571336  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.663772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42090]
I0320 07:29:27.571723  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9/status: (3.226472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42082]
I0320 07:29:27.572065  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (2.173745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42088]
I0320 07:29:27.572668  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.469718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.575206  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.518754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42090]
I0320 07:29:27.575223  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (2.802303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42082]
I0320 07:29:27.575481  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.575647  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:27.575666  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:27.575758  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.575793  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.578853  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.088128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42094]
I0320 07:29:27.579112  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (2.752828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42092]
I0320 07:29:27.579374  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10/status: (3.065089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42090]
I0320 07:29:27.579825  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (6.821024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.581866  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (2.032717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42092]
I0320 07:29:27.582197  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.582377  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:27.582414  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:27.582502  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.582538  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.584691  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.667235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.585751  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11/status: (1.73129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42094]
I0320 07:29:27.585874  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (1.613487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42092]
I0320 07:29:27.586224  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.30524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.586733  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.670865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I0320 07:29:27.587158  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (998.348µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42092]
I0320 07:29:27.587416  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.587524  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:27.587545  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:27.587623  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.587672  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.588871  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.743176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.590049  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.784269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42098]
I0320 07:29:27.590174  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12/status: (1.970543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42092]
I0320 07:29:27.590640  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (2.462097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42094]
I0320 07:29:27.591674  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (1.098203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42092]
I0320 07:29:27.591894  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.592045  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:27.592067  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:27.592251  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.592293  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.593453  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (3.680656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.594516  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.436637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42100]
I0320 07:29:27.594728  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (2.199841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42094]
I0320 07:29:27.595369  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13/status: (2.834145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42098]
I0320 07:29:27.595826  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.59284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.597011  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (1.268074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42094]
I0320 07:29:27.597246  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.597686  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.445838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42078]
I0320 07:29:27.597930  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:27.597953  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:27.598034  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.598097  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.600696  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.893681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42104]
I0320 07:29:27.600926  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.97157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42094]
I0320 07:29:27.601122  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14/status: (2.76524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42100]
I0320 07:29:27.601219  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (2.347675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42102]
I0320 07:29:27.603621  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.118393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42094]
I0320 07:29:27.605474  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (1.5914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42100]
I0320 07:29:27.605682  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.605787  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:27.605800  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:27.605864  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.605927  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.605996  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.980068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42104]
I0320 07:29:27.609641  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.715004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42108]
I0320 07:29:27.609775  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15/status: (3.30331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42100]
I0320 07:29:27.610246  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (3.368285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42110]
I0320 07:29:27.610747  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (4.246689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42104]
I0320 07:29:27.611121  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (999.496µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42108]
I0320 07:29:27.611442  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.611604  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:27.611619  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:27.611699  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.611731  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.612032  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.390539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42110]
I0320 07:29:27.613517  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (1.288756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42106]
I0320 07:29:27.614038  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.400988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42112]
I0320 07:29:27.614301  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16/status: (2.052526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42104]
I0320 07:29:27.616011  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (1.305214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42112]
I0320 07:29:27.616434  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.616598  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:27.616620  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:27.616710  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.616751  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.617426  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.736268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42110]
I0320 07:29:27.618528  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (1.200326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42106]
I0320 07:29:27.619063  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17/status: (1.530413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42112]
I0320 07:29:27.619259  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.928148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42114]
I0320 07:29:27.620035  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.168534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42110]
I0320 07:29:27.620626  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (973.808µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42112]
I0320 07:29:27.620815  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.620971  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:27.620995  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:27.621095  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.621130  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.622942  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (1.569917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42106]
I0320 07:29:27.623385  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.792501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42114]
I0320 07:29:27.623478  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.810547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42116]
I0320 07:29:27.623550  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18/status: (2.196539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42112]
I0320 07:29:27.625776  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (1.575657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42112]
I0320 07:29:27.626302  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.626617  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.843491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42114]
I0320 07:29:27.626707  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:27.626717  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:27.626801  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.626837  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.631229  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-6.158d9a2db12680ed: (3.57726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42118]
I0320 07:29:27.633838  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (5.858663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42106]
I0320 07:29:27.634287  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (6.089544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42112]
I0320 07:29:27.635294  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (6.680174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42114]
I0320 07:29:27.637501  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.638268  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:27.638289  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:27.638379  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.638440  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.640300  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.276136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42120]
I0320 07:29:27.640952  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (3.308856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42114]
I0320 07:29:27.641655  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (2.796565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42118]
I0320 07:29:27.642259  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19/status: (3.572967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42106]
I0320 07:29:27.642996  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.645534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42114]
I0320 07:29:27.643858  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (1.172962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42118]
I0320 07:29:27.644127  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.644959  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.452876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42114]
I0320 07:29:27.645050  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:27.645071  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:27.645226  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.645292  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.647546  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.541203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42122]
I0320 07:29:27.648100  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.723498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42118]
I0320 07:29:27.650002  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20/status: (3.930573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42120]
I0320 07:29:27.654798  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (4.345165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42120]
I0320 07:29:27.654999  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.655346  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (8.841006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42124]
I0320 07:29:27.655540  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:27.655558  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:27.655649  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.655687  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.658040  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (2.183551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42122]
I0320 07:29:27.658603  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-7.158d9a2db1775db8: (2.331244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0320 07:29:27.658874  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (2.790765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42120]
I0320 07:29:27.659095  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.659253  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (2.575205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42118]
I0320 07:29:27.659254  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:27.659275  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:27.659343  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.659376  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.661156  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.17428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42154]
I0320 07:29:27.662060  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (1.820036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42122]
I0320 07:29:27.662184  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21/status: (1.727449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42152]
I0320 07:29:27.663850  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (1.29653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42122]
I0320 07:29:27.664170  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.664341  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:27.664376  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:27.664503  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.664563  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.666620  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (1.860704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42122]
I0320 07:29:27.667497  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.22042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42156]
I0320 07:29:27.668032  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22/status: (2.263367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42154]
I0320 07:29:27.669904  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (1.428963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42154]
I0320 07:29:27.670412  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.672336  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods: (1.518091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42156]
I0320 07:29:27.672604  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:27.672617  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:27.672708  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.672744  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.676583  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (2.91209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42122]
I0320 07:29:27.676975  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23/status: (3.077479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42156]
I0320 07:29:27.677342  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (3.957786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42158]
I0320 07:29:27.678964  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (1.087685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42156]
I0320 07:29:27.679284  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.679455  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:27.679472  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:27.679555  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.679594  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.679705  105913 cacher.go:647] cacher (*core.Pod): 1 objects queued in incoming channel.
I0320 07:29:27.679742  105913 cacher.go:647] cacher (*core.Pod): 2 objects queued in incoming channel.
I0320 07:29:27.679760  105913 cacher.go:647] cacher (*core.Pod): 3 objects queued in incoming channel.
I0320 07:29:27.679770  105913 cacher.go:647] cacher (*core.Pod): 4 objects queued in incoming channel.
I0320 07:29:27.679784  105913 cacher.go:647] cacher (*core.Pod): 5 objects queued in incoming channel.
I0320 07:29:27.679793  105913 cacher.go:647] cacher (*core.Pod): 6 objects queued in incoming channel.
I0320 07:29:27.679808  105913 cacher.go:647] cacher (*core.Pod): 7 objects queued in incoming channel.
I0320 07:29:27.681812  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (1.585358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42158]
I0320 07:29:27.682210  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (1.985147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42156]
I0320 07:29:27.682443  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.682574  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:27.682589  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:27.682673  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.682786  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.684624  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24/status: (1.54072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42158]
I0320 07:29:27.684979  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.726124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42162]
I0320 07:29:27.685842  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-8.158d9a2db1d1ad2c: (3.11336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42156]
I0320 07:29:27.686518  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.03848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42158]
I0320 07:29:27.686727  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.686855  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:27.686879  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:27.686991  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.687032  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.687842  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.313382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42160]
I0320 07:29:27.688613  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (999.653µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42162]
I0320 07:29:27.689729  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25/status: (2.114875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42158]
I0320 07:29:27.690190  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.247612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42160]
I0320 07:29:27.691153  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (1.041329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42158]
I0320 07:29:27.691451  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.691609  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:27.691626  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:27.691725  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.691774  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.693001  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (1.056466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42160]
I0320 07:29:27.693481  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26/status: (1.50063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42162]
I0320 07:29:27.696267  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (2.184394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42162]
I0320 07:29:27.696282  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (4.051672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42164]
I0320 07:29:27.696498  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.696657  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:27.696672  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:27.696750  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.696792  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.698470  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27/status: (1.447293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42162]
I0320 07:29:27.698889  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.410149ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42166]
I0320 07:29:27.699228  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.625459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42160]
I0320 07:29:27.700823  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.250788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42166]
I0320 07:29:27.701143  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.701306  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:27.701326  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:27.701418  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.701458  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.703647  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.623274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42168]
I0320 07:29:27.704013  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28/status: (2.333296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42160]
I0320 07:29:27.704956  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (2.975293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42162]
I0320 07:29:27.705717  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (985.412µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42160]
I0320 07:29:27.706005  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.706181  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:27.706198  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:27.706271  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.706311  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.707796  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (1.029466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42168]
I0320 07:29:27.707935  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (1.464465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42162]
I0320 07:29:27.708212  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.708372  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:27.708406  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:27.708507  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.708570  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.709121  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-11.158d9a2db3264a65: (2.112985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42170]
I0320 07:29:27.709934  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (1.163083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42162]
I0320 07:29:27.710636  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29/status: (1.804694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42168]
I0320 07:29:27.711732  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.902871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42170]
I0320 07:29:27.712504  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (1.26249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42168]
I0320 07:29:27.712843  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.713020  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:27.713059  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:27.713206  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.713265  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.714700  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (1.183629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42170]
I0320 07:29:27.715956  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.040083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.716637  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30/status: (3.09749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42162]
I0320 07:29:27.718586  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (1.482331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.718906  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.719022  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:27.719036  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:27.719121  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.719158  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.720641  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (1.347653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.721450  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (1.279887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42170]
I0320 07:29:27.721602  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.721824  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:27.721873  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:27.721961  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.722021  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.722789  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-12.158d9a2db374a414: (2.903558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42174]
I0320 07:29:27.723786  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (1.246012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.724445  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31/status: (2.109261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42170]
I0320 07:29:27.725202  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.312168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42174]
I0320 07:29:27.726457  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (1.409425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42170]
I0320 07:29:27.727027  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.727227  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:27.727247  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:27.727339  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.727381  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.728807  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (1.203691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.728974  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.143866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42176]
I0320 07:29:27.730311  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32/status: (2.680003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42174]
I0320 07:29:27.732265  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (1.211859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42176]
I0320 07:29:27.732467  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.732609  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:27.732626  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:27.732746  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.732782  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.734502  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.346253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.735887  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (2.320102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42178]
I0320 07:29:27.736784  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33/status: (1.827696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42176]
I0320 07:29:27.738106  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (987.512µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42178]
I0320 07:29:27.738303  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.738505  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:27.738516  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:27.738591  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.738621  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.740723  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.518167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42180]
I0320 07:29:27.742670  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (1.500749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.744463  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34/status: (5.475825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42178]
I0320 07:29:27.747284  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (2.400476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.747536  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.747756  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:27.747793  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:27.748029  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.748117  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.752602  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (2.548527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42180]
I0320 07:29:27.753251  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35/status: (3.080653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.753257  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.496364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42182]
I0320 07:29:27.755361  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (1.166404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.755749  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.756020  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:27.756061  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:27.756223  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.756334  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.758938  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.948398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42180]
I0320 07:29:27.759174  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.911724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42184]
I0320 07:29:27.758943  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36/status: (1.923085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.760626  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.017849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.760979  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.761188  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:27.761264  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:27.761380  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.761439  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.763748  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.873761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42186]
I0320 07:29:27.764263  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37/status: (2.584633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42184]
I0320 07:29:27.765604  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (3.930587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.766325  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (1.036905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42184]
I0320 07:29:27.766628  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.766828  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:27.766846  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:27.766941  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.766987  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.768854  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (1.125233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42172]
I0320 07:29:27.769237  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (1.549449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42186]
I0320 07:29:27.769987  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-15.158d9a2db48b34a4: (2.022933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0320 07:29:27.770465  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.770609  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:27.770658  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:27.770763  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.770847  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.772601  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.262083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42190]
I0320 07:29:27.774044  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.043672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42192]
I0320 07:29:27.774583  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (3.558697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0320 07:29:27.776382  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38/status: (5.318665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42186]
I0320 07:29:27.777830  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (1.012079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0320 07:29:27.778051  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.778212  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:27.778229  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:27.778318  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.778351  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.780843  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.60902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42198]
I0320 07:29:27.783904  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39/status: (4.996704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0320 07:29:27.786868  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (2.311428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0320 07:29:27.787099  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.787240  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:27.787257  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:27.787335  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.787450  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.788875  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (1.169962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42198]
I0320 07:29:27.789611  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40/status: (1.925772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0320 07:29:27.790763  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (11.922675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42192]
I0320 07:29:27.790868  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.446007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42200]
I0320 07:29:27.791110  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (1.089651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42188]
I0320 07:29:27.791361  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.791511  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:27.791530  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:27.791630  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.791674  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.792939  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (1.070572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42198]
I0320 07:29:27.793522  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.30646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42202]
I0320 07:29:27.794627  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41/status: (2.229653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42192]
I0320 07:29:27.797150  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (2.073126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42202]
I0320 07:29:27.797420  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.797565  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:27.797587  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:27.797674  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.797774  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.799107  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (1.113094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42198]
I0320 07:29:27.799705  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.270758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0320 07:29:27.801191  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42/status: (3.220545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42202]
I0320 07:29:27.802713  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (1.085936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0320 07:29:27.802956  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.803811  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:27.803841  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:27.803932  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.803971  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.805798  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (1.485375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0320 07:29:27.806452  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43/status: (2.180527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42198]
I0320 07:29:27.807029  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.542342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42206]
I0320 07:29:27.808592  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (1.755228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42198]
I0320 07:29:27.809139  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.809413  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:27.809440  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:27.809521  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.809552  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.811643  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (1.424673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0320 07:29:27.811807  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.598738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0320 07:29:27.812588  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44/status: (2.08977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42206]
I0320 07:29:27.814124  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (1.175878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0320 07:29:27.816187  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.816411  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:27.816431  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:27.816513  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.816556  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.817770  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (1.015534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0320 07:29:27.818774  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.735753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0320 07:29:27.821307  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45/status: (2.076306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0320 07:29:27.822770  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (1.035001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0320 07:29:27.822990  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.823260  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:27.823309  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:27.823452  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.823499  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.825124  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (1.238334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0320 07:29:27.826691  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46/status: (2.639589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0320 07:29:27.827134  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.896402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42210]
I0320 07:29:27.828925  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (1.181616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0320 07:29:27.829136  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.829288  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:27.829332  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:27.829435  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.829470  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.831236  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (1.090276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0320 07:29:27.831888  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47/status: (2.212178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0320 07:29:27.832304  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (2.220793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42212]
I0320 07:29:27.833494  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (1.171818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42204]
I0320 07:29:27.833792  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.833949  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:27.833967  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:27.834129  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.834205  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.837247  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48/status: (2.738372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42212]
I0320 07:29:27.837566  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (2.500376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42214]
I0320 07:29:27.838843  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (1.200541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42212]
I0320 07:29:27.839066  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.839273  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:27.839292  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:27.839364  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.839409  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.840744  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (1.095988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42214]
I0320 07:29:27.841111  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49/status: (1.525238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42212]
I0320 07:29:27.842172  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (7.448202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0320 07:29:27.844368  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.718353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0320 07:29:27.844572  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (2.352622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42212]
I0320 07:29:27.844810  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.844948  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:27.844967  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:27.845050  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.845127  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.846377  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.109005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0320 07:29:27.846528  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.192793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42214]
I0320 07:29:27.846746  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.846877  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:27.846899  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:27.846971  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.847018  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.847924  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-24.158d9a2db91fd5ca: (2.149548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42216]
I0320 07:29:27.848613  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.029458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0320 07:29:27.849184  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.599031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42214]
I0320 07:29:27.849439  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.849635  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:27.849684  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:27.849803  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.849873  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.851144  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-27.158d9a2db9f5ae5d: (2.324013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42216]
I0320 07:29:27.851520  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (1.320641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42214]
I0320 07:29:27.852027  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.852157  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:27.852178  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:27.852251  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.852289  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.852447  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (2.281712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0320 07:29:27.853992  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-28.158d9a2dba3ce3c9: (2.058568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42216]
I0320 07:29:27.854047  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.408807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42214]
I0320 07:29:27.854164  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.432976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42218]
I0320 07:29:27.854269  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.854506  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:27.854522  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:27.854610  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.854645  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.856362  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (1.585924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0320 07:29:27.856647  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.856795  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:27.856812  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:27.856902  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:27.856944  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:27.857123  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (1.968789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42220]
I0320 07:29:27.857186  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-36.158d9a2dbd823283: (2.480612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42214]
I0320 07:29:27.858448  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (1.070313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:27.858468  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (1.353253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42208]
I0320 07:29:27.859052  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:27.859936  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-37.158d9a2dbdd00b78: (2.156139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42220]
I0320 07:29:27.862503  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-39.158d9a2dbed22fa2: (2.01949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:27.875926  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.293308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:27.976590  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.853141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:28.076543  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.83804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:28.176361  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.701395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:28.276630  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.859618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:28.376432  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.688545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:28.476530  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.79473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:28.523121  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:28.535242  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:28.538072  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:28.540317  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:28.540357  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:28.576491  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.746042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:28.676479  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.738226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:28.776459  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.715328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:28.876511  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.758833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:28.976623  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.895908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:29.078598  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (3.900715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:29.176452  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.807204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:29.276708  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.969702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:29.376453  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.747479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:29.420579  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:29.420612  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:29.420780  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.420837  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.422777  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.696911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:29.423022  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.423048  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.404399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42214]
I0320 07:29:29.423318  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.425189  105913 wrap.go:47] PUT /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod/status: (1.701652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42214]
I0320 07:29:29.427316  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/preemptor-pod.158d9a2dac180ae9: (5.295913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42684]
I0320 07:29:29.430715  105913 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-0: (5.134641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42214]
I0320 07:29:29.430990  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:29.431003  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:29.431140  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.431178  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.432926  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.849284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42684]
I0320 07:29:29.433699  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (2.065516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:29.433699  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (2.005039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.433892  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.433912  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.434054  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:29.434094  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:29.434199  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.434245  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.439541  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (4.986582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:29.439865  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (5.332502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.440114  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.440203  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.440409  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:29.440430  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:29.440516  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.440556  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.441928  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (1.218269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:29.442181  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.442342  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:29.442360  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:29.442446  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.442455  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (1.620535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.442485  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.443068  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.444066  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (1.168888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:29.444324  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.444498  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:29.444515  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:29.444597  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.444637  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.445059  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (1.905495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42690]
I0320 07:29:29.445321  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.446217  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (1.326756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.446469  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.446959  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (2.054129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42222]
I0320 07:29:29.447253  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.447409  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:29.447426  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:29.447570  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.447608  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.448924  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (1.077165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42690]
I0320 07:29:29.449230  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.449543  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (1.756088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.449741  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.449876  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:29.449891  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:29.449963  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.450003  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.451695  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (1.487931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.451921  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (1.656202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42690]
I0320 07:29:29.451958  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.452125  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:29.452136  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:29.452226  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.452279  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.452301  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.453733  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (1.228118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.453959  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.454332  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (1.817789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42690]
I0320 07:29:29.454681  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-1.158d9a2dae55b7a8: (20.519792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42684]
I0320 07:29:29.454744  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.455017  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:29.455035  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:29.455119  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.455157  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.456891  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (954.685µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0320 07:29:29.457262  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (1.960891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42690]
I0320 07:29:29.457303  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.457513  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.457640  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:29.457664  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:29.457808  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.457846  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.459095  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (1.021248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0320 07:29:29.459274  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (1.276295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42690]
I0320 07:29:29.459296  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.459482  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.459587  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:29.459601  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:29.459678  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.459717  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.461102  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (1.141951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0320 07:29:29.461293  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.461349  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (1.299663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42690]
I0320 07:29:29.461575  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.461773  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:29.461791  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:29.461882  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.462071  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.463902  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-2.158d9a2daf735f23: (8.253344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.463905  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (1.267971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0320 07:29:29.464155  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.464258  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:29.464271  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:29.464337  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.464374  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.465817  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (996.848µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.466272  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (1.26696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42694]
I0320 07:29:29.466502  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.466782  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.466905  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:29.466915  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19
I0320 07:29:29.467006  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.467037  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.467133  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-3.158d9a2dafc571b7: (2.59215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0320 07:29:29.468363  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (6.161521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42690]
I0320 07:29:29.468674  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (1.424659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.468900  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-19: (1.642117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42694]
I0320 07:29:29.468941  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.468951  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.469112  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.469183  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:29.469192  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20
I0320 07:29:29.469279  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.469315  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.470712  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.138411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42690]
I0320 07:29:29.470990  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.471215  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-20: (1.74145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.471462  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.471601  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:29.471628  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7
I0320 07:29:29.471711  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.471781  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.472950  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-4.158d9a2db0390b6a: (4.986295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42692]
I0320 07:29:29.473438  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (1.486463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42690]
I0320 07:29:29.473530  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-7: (1.608567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.473778  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.473920  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:29.473940  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21
I0320 07:29:29.474130  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.474045  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.474204  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.475722  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (1.083672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.475951  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.475954  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (890.008µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42698]
I0320 07:29:29.477736  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-21: (3.276793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42690]
I0320 07:29:29.478037  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.478051  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-5.158d9a2db0ac16e4: (2.689662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42700]
I0320 07:29:29.478239  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:29.478303  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22
I0320 07:29:29.478433  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.478472  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.480249  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (1.500288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42698]
I0320 07:29:29.480482  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.480635  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:29.480682  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23
I0320 07:29:29.480699  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-22: (1.678418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42702]
I0320 07:29:29.480793  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.480837  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.480909  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.482360  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (1.276716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42702]
I0320 07:29:29.482625  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.482735  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:29.482745  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8
I0320 07:29:29.482798  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-9.158d9a2db24ab028: (4.038442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.482807  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.482843  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.483006  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-23: (1.060991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42698]
I0320 07:29:29.483251  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.485504  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (2.056992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.485595  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-8: (2.476571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42702]
I0320 07:29:29.485785  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.485819  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.486030  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:29.486047  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25
I0320 07:29:29.486262  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.486323  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.487166  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-10.158d9a2db2bf639d: (2.248726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42698]
I0320 07:29:29.489798  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (2.53472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42704]
I0320 07:29:29.489825  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-25: (3.230641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.490009  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.490037  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.490256  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:29.490269  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26
I0320 07:29:29.490338  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-13.158d9a2db3bb29a6: (2.628945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42698]
I0320 07:29:29.490348  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.490380  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.491544  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (998.817µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.491834  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.492643  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-26: (1.495468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42704]
I0320 07:29:29.492905  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.493020  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:29.493036  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11
I0320 07:29:29.493046  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-14.158d9a2db41369a4: (2.19667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42698]
I0320 07:29:29.493127  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.493165  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.494858  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (1.463572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42704]
I0320 07:29:29.495101  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.495203  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:29.495216  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29
I0320 07:29:29.495280  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.495301  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-11: (1.92529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.495314  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.495548  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.496462  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (1.00647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42704]
I0320 07:29:29.496848  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.497013  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:29.497049  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30
I0320 07:29:29.497166  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.497204  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.500239  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-29: (4.65256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.500470  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.500646  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-16.158d9a2db4e3cb0c: (6.798016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42706]
I0320 07:29:29.500666  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (3.087322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42708]
I0320 07:29:29.500965  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-30: (3.618195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42704]
I0320 07:29:29.500984  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.501333  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.501459  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:29.501475  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12
I0320 07:29:29.501542  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.501578  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.503096  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (1.353851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.503517  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.503635  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:29.503651  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31
I0320 07:29:29.503653  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-17.158d9a2db5305f13: (2.265502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42706]
I0320 07:29:29.503759  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.503797  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.504052  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-12: (2.068011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0320 07:29:29.505800  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (1.462045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0320 07:29:29.506001  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.506582  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.507055  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-18.158d9a2db5733229: (1.798339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42706]
I0320 07:29:29.507356  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-31: (2.747027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42686]
I0320 07:29:29.507610  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.507754  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:29.507776  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32
I0320 07:29:29.507861  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.507899  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.509257  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (1.224017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0320 07:29:29.509499  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.509624  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:29.509646  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33
I0320 07:29:29.509711  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-32: (1.349239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42714]
I0320 07:29:29.509889  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.509968  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.510338  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.510981  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-6.158d9a2db12680ed: (3.213066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0320 07:29:29.511594  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (1.098873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42714]
I0320 07:29:29.511753  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-33: (1.522864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0320 07:29:29.511955  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.512123  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:29.512133  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34
I0320 07:29:29.512227  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.512256  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.513698  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (1.171266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42716]
I0320 07:29:29.513972  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.514142  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.523303  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:29.529659  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-34: (15.627008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0320 07:29:29.530003  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.530144  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:29.530163  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35
I0320 07:29:29.530237  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.530268  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.535234  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (4.494264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0320 07:29:29.535602  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-35: (5.109503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42716]
I0320 07:29:29.535705  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:29.535906  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.536017  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.536175  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:29.536187  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15
I0320 07:29:29.536259  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.536292  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.538222  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:29.540475  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:29.543140  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (6.366715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0320 07:29:29.543528  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-15: (7.007339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42716]
I0320 07:29:29.543735  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.543841  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.544412  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:29.544429  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38
I0320 07:29:29.544521  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.544552  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.549268  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-19.158d9a2db67b49fb: (37.755013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0320 07:29:29.573133  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:29.585503  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (11.384816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0320 07:29:29.585627  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (10.920879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0320 07:29:29.585735  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.585940  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:29.585982  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40
I0320 07:29:29.586114  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.586168  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.586789  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-38: (12.683566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42716]
I0320 07:29:29.587160  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.588124  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-20.158d9a2db6e3cd5a: (13.246057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42752]
I0320 07:29:29.588435  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (2.013569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42712]
I0320 07:29:29.588737  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.590377  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-40: (4.028467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0320 07:29:29.590597  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.590961  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-7.158d9a2db1775db8: (2.145298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42752]
I0320 07:29:29.591975  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:29.591990  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41
I0320 07:29:29.592100  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.592135  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.593733  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (1.117886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42716]
I0320 07:29:29.593974  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.594149  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:29.594163  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42
I0320 07:29:29.594238  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.594271  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.595138  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-41: (2.506478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42754]
I0320 07:29:29.595483  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.595908  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-21.158d9a2db7babf09: (4.39577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0320 07:29:29.596011  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (1.356053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42756]
I0320 07:29:29.596332  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.597017  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-42: (2.562577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42716]
I0320 07:29:29.597274  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.597416  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:29.597438  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43
I0320 07:29:29.597526  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.597570  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.602977  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (1.16662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42754]
I0320 07:29:29.603036  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-22.158d9a2db809e1ad: (6.561495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0320 07:29:29.603230  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.604547  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-43: (2.265295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42716]
I0320 07:29:29.604760  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.604877  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:29.604909  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44
I0320 07:29:29.605014  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.605072  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.606470  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (1.21444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42716]
I0320 07:29:29.606733  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.606866  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:29.606895  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45
I0320 07:29:29.606986  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.607036  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.607233  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-44: (1.958319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42754]
I0320 07:29:29.607493  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.608215  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-23.158d9a2db886bb53: (4.496679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0320 07:29:29.608328  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (1.12582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42716]
I0320 07:29:29.608512  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-45: (993.929µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42754]
I0320 07:29:29.608608  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.608715  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:29.608725  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46
I0320 07:29:29.608767  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.608849  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.608896  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.610618  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (1.487877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42758]
I0320 07:29:29.610842  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-8.158d9a2db1d1ad2c: (2.070851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0320 07:29:29.610930  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.611117  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:29.611134  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47
I0320 07:29:29.611203  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-46: (1.485232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42760]
I0320 07:29:29.611201  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.611312  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.611456  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.613231  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (1.558854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42762]
I0320 07:29:29.613529  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.613940  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-25.158d9a2db960c232: (2.426273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0320 07:29:29.614527  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-47: (1.585329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42758]
I0320 07:29:29.614850  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.615002  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:29.615028  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48
I0320 07:29:29.615133  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.615192  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.616632  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (1.195857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42758]
I0320 07:29:29.616888  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.617014  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:29.617044  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49
I0320 07:29:29.617168  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.617219  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.617384  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-48: (1.769336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42762]
I0320 07:29:29.619590  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-26.158d9a2db9a91fc0: (4.84232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0320 07:29:29.619891  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.620512  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (1.269926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42758]
I0320 07:29:29.620715  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.620836  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:29.620846  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24
I0320 07:29:29.620911  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.620941  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.621470  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-49: (1.151386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42764]
I0320 07:29:29.621699  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.622121  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.005395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42758]
I0320 07:29:29.622315  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-24: (1.111833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42710]
I0320 07:29:29.622335  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.622466  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:29.622504  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27
I0320 07:29:29.622553  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.622824  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.622876  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.623818  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-11.158d9a2db3264a65: (3.613875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42762]
I0320 07:29:29.624192  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.071684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42758]
I0320 07:29:29.624384  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-27: (1.248728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42764]
I0320 07:29:29.624830  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.624908  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.625039  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:29.625064  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28
I0320 07:29:29.625190  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.625244  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.627517  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (1.956527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42762]
I0320 07:29:29.627576  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-28: (1.780263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:29.627782  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.627915  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.628031  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:29.628060  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36
I0320 07:29:29.628158  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-29.158d9a2dbaa96642: (2.961595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42758]
I0320 07:29:29.628157  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.628197  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.629812  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.432338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42762]
I0320 07:29:29.630062  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.630232  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:29.630274  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37
I0320 07:29:29.630384  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.630476  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.631190  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-30.158d9a2dbaf10f2a: (2.363357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42758]
I0320 07:29:29.632807  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-36: (1.751503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:29.632935  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (2.098231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.633056  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.633162  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.633638  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-37: (2.926282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42762]
I0320 07:29:29.634796  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.634971  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:29.634988  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39
I0320 07:29:29.635092  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:29.635125  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:29.635261  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-12.158d9a2db374a414: (3.511124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42758]
I0320 07:29:29.636811  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (1.330763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:29.637032  105913 backoff_utils.go:79] Backing off 2s
I0320 07:29:29.638350  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-39: (2.830289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.638617  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:29.638743  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-31.158d9a2dbb769eab: (2.414031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42758]
I0320 07:29:29.641385  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-32.158d9a2dbbc869aa: (1.961821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.644660  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-33.158d9a2dbc1adf22: (2.71386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.647553  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-34.158d9a2dbc73f8d7: (2.224115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.650438  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-35.158d9a2dbd04b79a: (2.158375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.653276  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-15.158d9a2db48b34a4: (2.309176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.655979  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-38.158d9a2dbe5f9b21: (2.119487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.658567  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-40.158d9a2dbf5ce728: (2.024688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.661456  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-41.158d9a2dbf9d7a2c: (2.139939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.664183  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-42.158d9a2dbffa853c: (2.159352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.666929  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-43.158d9a2dc0591ae7: (2.150959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.669291  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-44.158d9a2dc0ae482a: (1.833815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.672131  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-45.158d9a2dc1191ea2: (2.216507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.675044  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-46.158d9a2dc183108f: (2.218321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.677182  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.062465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:29.678495  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-47.158d9a2dc1de33cf: (2.905221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.680992  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-48.158d9a2dc225f1f1: (1.916337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.683859  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-49.158d9a2dc275aca4: (2.392412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.686258  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-24.158d9a2db91fd5ca: (1.915532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.692271  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-27.158d9a2db9f5ae5d: (5.482106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.727543  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-28.158d9a2dba3ce3c9: (3.541684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.734374  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-36.158d9a2dbd823283: (6.217787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.738037  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-37.158d9a2dbdd00b78: (3.158672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.741019  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-39.158d9a2dbed22fa2: (2.456821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.776217  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.53635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.876558  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.839664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:29.976705  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.972677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:30.082449  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.231673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:30.176606  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.984428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:30.276577  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.874268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:30.376758  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.962856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:30.476782  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.009746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:30.523495  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:30.535848  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:30.538505  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:30.540629  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:30.573744  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:30.576583  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.863366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:30.676517  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.807041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:30.776444  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.724856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:30.876938  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.098921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:30.976737  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.895063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:31.076681  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.909456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:31.176580  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.9129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:31.276945  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.237958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:31.376717  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.938188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:31.476688  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.928606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:31.523701  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:31.536031  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:31.538659  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:31.540741  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:31.573872  105913 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 07:29:31.576745  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.022162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:31.676915  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.728289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:31.776972  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.184495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:31.876831  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.053535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:31.976782  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (2.027242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.076722  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.929309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.176503  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.861455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.276632  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.866892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.376709  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod: (1.973728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.421982  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:32.422019  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod
I0320 07:29:32.422230  105913 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod", node "node1"
I0320 07:29:32.422252  105913 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0320 07:29:32.422308  105913 factory.go:733] Attempting to bind preemptor-pod to node1
I0320 07:29:32.422630  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:32.422645  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1
I0320 07:29:32.422756  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.422800  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.425408  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/preemptor-pod/binding: (2.670195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.425501  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (1.773387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:32.425585  105913 scheduler.go:572] pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 07:29:32.425727  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.425825  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-1: (2.063112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43222]
I0320 07:29:32.425891  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:32.425919  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2
I0320 07:29:32.426063  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.426171  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.426356  105913 backoff_utils.go:79] Backing off 4s
I0320 07:29:32.427145  105913 wrap.go:47] GET /api/v1/namespaces/default: (2.021597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43226]
I0320 07:29:32.427781  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-1.158d9a2dae55b7a8: (4.012676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 07:29:32.427982  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (1.366223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.428225  105913 backoff_utils.go:79] Backing off 4s
I0320 07:29:32.428978  105913 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.440475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43226]
I0320 07:29:32.430109  105913 wrap.go:47] POST /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events: (1.389397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.430261  105913 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (921.968µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43226]
I0320 07:29:32.431890  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-2: (1.096304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:32.432127  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.432351  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:32.432369  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3
I0320 07:29:32.432464  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.432501  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.433206  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-2.158d9a2daf735f23: (2.573635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.433950  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (1.033759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 07:29:32.434327  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-3: (1.390296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:32.434370  105913 backoff_utils.go:79] Backing off 4s
I0320 07:29:32.434531  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.434661  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:32.434684  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4
I0320 07:29:32.434750  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.434788  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.436248  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (1.213177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 07:29:32.436588  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-4: (1.357218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:32.436839  105913 backoff_utils.go:79] Backing off 4s
I0320 07:29:32.436842  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.437020  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:32.437037  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5
I0320 07:29:32.437148  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.437230  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.438581  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-3.158d9a2dafc571b7: (4.612057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.438903  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (1.376916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 07:29:32.439304  105913 backoff_utils.go:79] Backing off 4s
I0320 07:29:32.439679  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-5: (1.883313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:32.439982  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.440296  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:32.440315  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9
I0320 07:29:32.440380  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.440432  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.441914  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (1.234785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:32.441984  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-9: (1.284081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 07:29:32.442155  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.442267  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:32.442291  105913 backoff_utils.go:79] Backing off 4s
I0320 07:29:32.442286  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10
I0320 07:29:32.442411  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.442443  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.442597  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-4.158d9a2db0390b6a: (3.464692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.444025  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (1.419019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 07:29:32.444033  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-10: (1.362802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:32.444449  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.444491  105913 backoff_utils.go:79] Backing off 4s
I0320 07:29:32.444581  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:32.444646  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13
I0320 07:29:32.445575  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.445639  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.447213  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (1.059791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 07:29:32.447271  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-13: (1.213391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:32.447454  105913 backoff_utils.go:79] Backing off 4s
I0320 07:29:32.447534  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.447619  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-5.158d9a2db0ac16e4: (4.405881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.447763  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:32.447787  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14
I0320 07:29:32.447933  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.447977  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.449224  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (1.006327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:32.449477  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.449599  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:32.449622  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16
I0320 07:29:32.449714  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.449752  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.451248  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-14: (3.014262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 07:29:32.451600  105913 backoff_utils.go:79] Backing off 4s
I0320 07:29:32.451805  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (1.071296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43228]
I0320 07:29:32.452127  105913 backoff_utils.go:79] Backing off 4s
I0320 07:29:32.452180  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-16: (1.766404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42768]
I0320 07:29:32.452474  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.452542  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-9.158d9a2db24ab028: (2.933526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42766]
I0320 07:29:32.452596  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:32.452613  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17
I0320 07:29:32.452696  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.452736  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.454216  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (1.2408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 07:29:32.454347  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-17: (1.477092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43228]
I0320 07:29:32.454584  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.454598  105913 backoff_utils.go:79] Backing off 4s
I0320 07:29:32.454706  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:32.454723  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6
I0320 07:29:32.454795  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.454834  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.456335  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (1.277475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43228]
I0320 07:29:32.456448  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-6: (1.122019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 07:29:32.456648  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.456674  105913 backoff_utils.go:79] Backing off 4s
I0320 07:29:32.456767  105913 scheduling_queue.go:908] About to try and schedule pod preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:32.456782  105913 scheduler.go:453] Attempting to schedule pod: preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18
I0320 07:29:32.456877  105913 factory.go:647] Unable to schedule preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 07:29:32.456909  105913 factory.go:742] Updating pod condition for preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 07:29:32.458375  105913 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/events/ppod-10.158d9a2db2bf639d: (4.414329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0320 07:29:32.458384  105913 wrap.go:47] GET /api/v1/namespaces/preemption-racee193c89c-4ae1-11e9-8a3c-0242ac110002/pods/ppod-18: (1.262896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 07:29:32.458611  105913 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 07:29:32.458804  10