Result: FAILURE
Tests: 1 failed / 649 succeeded
Started: 2019-03-19 15:54
Elapsed: 29m6s
Revision:
Builder: gke-prow-containerd-pool-99179761-shq7
resultstore: https://source.cloud.google.com/results/invocations/a0a40b0d-a39c-48b6-9d47-10010d118d02/targets/test
pod: 1a982d32-4a5f-11e9-ab9f-0a580a6c0a8e
infra-commit: 36eefcc45
repo: k8s.io/kubernetes
repo-commit: 1d441c1f93e7cc44f8a200df28a2c8a4bee3a2bb
repos: k8s.io/kubernetes: master

Test Failures

k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
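To reproduce locally, the test needs an etcd endpoint to act as the storage backend; the log below shows the test's embedded apiserver dialing http://127.0.0.1:2379. A minimal sketch, assuming etcd is on PATH and the k8s.io/kubernetes repo is checked out at the repo-commit above (the etcd flags are standard etcd options, not taken from this job's configuration):

# Start a local etcd on the endpoint the storage backend dials.
etcd --listen-client-urls http://127.0.0.1:2379 --advertise-client-urls http://127.0.0.1:2379 &

# From the kubernetes repo root, run only the failing test.
go test -v k8s.io/kubernetes/test/integration/scheduler -run 'TestPreemptionRaces$'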
I0319 16:15:08.560236  106300 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0319 16:15:08.560311  106300 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0319 16:15:08.560338  106300 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0319 16:15:08.560360  106300 master.go:233] Using reconciler: 
I0319 16:15:08.563353  106300 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.563578  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.563652  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.563753  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.563838  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.564449  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.564644  106300 store.go:1319] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0319 16:15:08.564705  106300 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.564907  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.564924  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.565037  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.565161  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.565210  106300 reflector.go:161] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0319 16:15:08.565409  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.565824  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.566250  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.566708  106300 store.go:1319] Monitoring events count at <storage-prefix>//events
I0319 16:15:08.567602  106300 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.567768  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.567852  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.567919  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.568443  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.584636  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.584902  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.584969  106300 store.go:1319] Monitoring limitranges count at <storage-prefix>//limitranges
I0319 16:15:08.585020  106300 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.585042  106300 reflector.go:161] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0319 16:15:08.585181  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.585195  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.585236  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.585386  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.587057  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.587168  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.587361  106300 store.go:1319] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0319 16:15:08.587425  106300 reflector.go:161] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0319 16:15:08.587589  106300 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.587684  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.587696  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.587734  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.587793  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.596202  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.596532  106300 store.go:1319] Monitoring secrets count at <storage-prefix>//secrets
I0319 16:15:08.597150  106300 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.599606  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.599625  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.599671  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.598446  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.598586  106300 reflector.go:161] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0319 16:15:08.599869  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.600219  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.600468  106300 store.go:1319] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0319 16:15:08.600498  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.600575  106300 reflector.go:161] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0319 16:15:08.600662  106300 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.600779  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.600791  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.600829  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.600889  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.601261  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.601313  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.601412  106300 store.go:1319] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0319 16:15:08.601487  106300 reflector.go:161] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0319 16:15:08.601644  106300 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.602590  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.602608  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.602654  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.602710  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.604662  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.604826  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.605120  106300 store.go:1319] Monitoring configmaps count at <storage-prefix>//configmaps
I0319 16:15:08.605159  106300 reflector.go:161] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0319 16:15:08.606654  106300 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.606784  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.606803  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.606839  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.606896  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.607220  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.607363  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.608125  106300 store.go:1319] Monitoring namespaces count at <storage-prefix>//namespaces
I0319 16:15:08.608173  106300 reflector.go:161] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0319 16:15:08.608316  106300 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.608399  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.608413  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.609895  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.609974  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.610318  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.610427  106300 store.go:1319] Monitoring endpoints count at <storage-prefix>//endpoints
I0319 16:15:08.610618  106300 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.610700  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.610712  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.610752  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.610790  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.610853  106300 reflector.go:161] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0319 16:15:08.611132  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.611361  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.611441  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.611527  106300 store.go:1319] Monitoring nodes count at <storage-prefix>//nodes
I0319 16:15:08.612017  106300 reflector.go:161] Listing and watching *core.Node from storage/cacher.go:/nodes
I0319 16:15:08.614661  106300 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.614770  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.614785  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.614819  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.614891  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.616040  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.616512  106300 store.go:1319] Monitoring pods count at <storage-prefix>//pods
I0319 16:15:08.616980  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.617114  106300 reflector.go:161] Listing and watching *core.Pod from storage/cacher.go:/pods
I0319 16:15:08.617832  106300 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.617951  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.617963  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.618014  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.618396  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.618745  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.618891  106300 store.go:1319] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0319 16:15:08.618962  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.619042  106300 reflector.go:161] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0319 16:15:08.619174  106300 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.623182  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.623255  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.623589  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.623765  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.624252  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.624445  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.625271  106300 store.go:1319] Monitoring services count at <storage-prefix>//services
I0319 16:15:08.625319  106300 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.625427  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.625484  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.625525  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.625572  106300 reflector.go:161] Listing and watching *core.Service from storage/cacher.go:/services
I0319 16:15:08.625762  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.626217  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.626324  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.626335  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.626375  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.626448  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.626499  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.626856  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.627122  106300 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.627215  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.627226  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.627255  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.627316  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.627402  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.627881  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.628036  106300 store.go:1319] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0319 16:15:08.628149  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.628181  106300 reflector.go:161] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0319 16:15:08.648850  106300 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0319 16:15:08.649005  106300 master.go:425] Enabling API group "authentication.k8s.io".
I0319 16:15:08.649030  106300 master.go:425] Enabling API group "authorization.k8s.io".
I0319 16:15:08.649397  106300 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.649700  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.649755  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.649838  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.649908  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.650677  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.650853  106300 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0319 16:15:08.651042  106300 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.651221  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.651236  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.651326  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.651402  106300 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0319 16:15:08.651594  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.651677  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.652017  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.652195  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.652569  106300 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0319 16:15:08.652725  106300 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0319 16:15:08.659232  106300 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.659393  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.659508  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.659662  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.659781  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.660369  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.660505  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.660753  106300 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0319 16:15:08.660930  106300 master.go:425] Enabling API group "autoscaling".
I0319 16:15:08.660866  106300 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0319 16:15:08.663956  106300 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.664130  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.664146  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.664190  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.664291  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.664899  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.664981  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.665257  106300 store.go:1319] Monitoring jobs.batch count at <storage-prefix>//jobs
I0319 16:15:08.665345  106300 reflector.go:161] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0319 16:15:08.665542  106300 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.665695  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.665738  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.665786  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.665850  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.666784  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.666822  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.667313  106300 store.go:1319] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0319 16:15:08.667342  106300 reflector.go:161] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0319 16:15:08.667390  106300 master.go:425] Enabling API group "batch".
I0319 16:15:08.667664  106300 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.667883  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.667932  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.667982  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.668146  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.668495  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.668568  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.668669  106300 store.go:1319] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0319 16:15:08.668695  106300 master.go:425] Enabling API group "certificates.k8s.io".
I0319 16:15:08.668863  106300 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.669039  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.669050  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.669157  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.669221  106300 reflector.go:161] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0319 16:15:08.669541  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.669961  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.670258  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.670336  106300 store.go:1319] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0319 16:15:08.670414  106300 reflector.go:161] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0319 16:15:08.670850  106300 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.670950  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.670964  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.671020  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.671261  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.671585  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.671681  106300 store.go:1319] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0319 16:15:08.671695  106300 master.go:425] Enabling API group "coordination.k8s.io".
I0319 16:15:08.671850  106300 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.671944  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.671958  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.671991  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.672327  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.672365  106300 reflector.go:161] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0319 16:15:08.672569  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.672848  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.672937  106300 store.go:1319] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0319 16:15:08.672938  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.672996  106300 reflector.go:161] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0319 16:15:08.673170  106300 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.673241  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.673250  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.673279  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.673615  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.675512  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.675555  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.675666  106300 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0319 16:15:08.675736  106300 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0319 16:15:08.675863  106300 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.675981  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.675997  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.676033  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.676169  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.676495  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.676695  106300 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0319 16:15:08.676874  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.676974  106300 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0319 16:15:08.677038  106300 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.677204  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.677250  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.677374  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.677481  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.677771  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.678129  106300 store.go:1319] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0319 16:15:08.678372  106300 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.678419  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.678533  106300 reflector.go:161] Listing and watching *networking.Ingress from storage/cacher.go:/ingresses
I0319 16:15:08.678847  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.678923  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.678997  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.679830  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.680311  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.681725  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.682516  106300 store.go:1319] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0319 16:15:08.682628  106300 reflector.go:161] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0319 16:15:08.682780  106300 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.682855  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.682874  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.683243  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.683329  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.683713  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.683763  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.683876  106300 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0319 16:15:08.684050  106300 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.684180  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.684192  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.684223  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.684292  106300 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0319 16:15:08.684376  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.684907  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.685034  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.685181  106300 store.go:1319] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0319 16:15:08.685203  106300 master.go:425] Enabling API group "extensions".
I0319 16:15:08.685213  106300 reflector.go:161] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0319 16:15:08.685394  106300 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.685499  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.685514  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.685574  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.685625  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.685882  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.685992  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.687246  106300 store.go:1319] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0319 16:15:08.687431  106300 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.687521  106300 reflector.go:161] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0319 16:15:08.687551  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.687568  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.687603  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.688388  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.688699  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.688785  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.688824  106300 store.go:1319] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0319 16:15:08.688842  106300 master.go:425] Enabling API group "networking.k8s.io".
I0319 16:15:08.688864  106300 reflector.go:161] Listing and watching *networking.Ingress from storage/cacher.go:/ingresses
I0319 16:15:08.688879  106300 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.688961  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.688973  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.689030  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.689147  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.689498  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.689750  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.690828  106300 store.go:1319] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0319 16:15:08.690860  106300 master.go:425] Enabling API group "node.k8s.io".
I0319 16:15:08.691143  106300 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.691225  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.691235  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.691268  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.691305  106300 reflector.go:161] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0319 16:15:08.691961  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.692370  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.692519  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.692657  106300 store.go:1319] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0319 16:15:08.692831  106300 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.692947  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.692964  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.692993  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.693033  106300 reflector.go:161] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0319 16:15:08.693278  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.693615  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.693760  106300 store.go:1319] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0319 16:15:08.693785  106300 master.go:425] Enabling API group "policy".
I0319 16:15:08.693883  106300 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.694019  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.694039  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.694144  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.694198  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.694234  106300 reflector.go:161] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0319 16:15:08.694444  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.695563  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.695768  106300 store.go:1319] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0319 16:15:08.695985  106300 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.696223  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.696267  106300 reflector.go:161] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0319 16:15:08.696557  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.696607  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.696817  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.697003  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.697491  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.697769  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.697861  106300 store.go:1319] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0319 16:15:08.697889  106300 reflector.go:161] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0319 16:15:08.697898  106300 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.697996  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.698007  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.698135  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.698185  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.699412  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.699504  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.699926  106300 store.go:1319] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0319 16:15:08.699986  106300 reflector.go:161] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0319 16:15:08.700375  106300 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.700465  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.700481  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.700532  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.700665  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.701903  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.702040  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.702249  106300 store.go:1319] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0319 16:15:08.702321  106300 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.702398  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.702435  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.702480  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.702526  106300 reflector.go:161] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0319 16:15:08.702728  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.702985  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.703040  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.703139  106300 store.go:1319] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0319 16:15:08.703160  106300 reflector.go:161] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0319 16:15:08.703430  106300 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.703780  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.703818  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.703867  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.703965  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.706165  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.706373  106300 store.go:1319] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0319 16:15:08.706411  106300 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.706557  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.706576  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.706605  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.706661  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.706720  106300 reflector.go:161] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0319 16:15:08.706835  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.707331  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.707494  106300 store.go:1319] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0319 16:15:08.707748  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.707813  106300 reflector.go:161] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0319 16:15:08.708169  106300 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.708273  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.708294  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.708336  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.708389  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.709293  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.709365  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.709433  106300 store.go:1319] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0319 16:15:08.709496  106300 master.go:425] Enabling API group "rbac.authorization.k8s.io".
I0319 16:15:08.709525  106300 reflector.go:161] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
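
Every RBAC resource above goes through the same four steps: storage_factory.go picks the storage encoding (rbac.authorization.k8s.io/v1 on disk, decoded back to the __internal version), client.go dials a dedicated etcd gRPC client (the "parsed scheme" / "pin 127.0.0.1:2379" lines), store.go starts count monitoring under the test run's random prefix, and reflector.go begins the list-watch that fills the watch cache. As a rough sketch, the storagebackend.Config dumped on each storage_factory.go:285 line corresponds to a Go literal like the one below; the field names and values are lifted from the log itself, and the codec, versioner, and transformer are nil exactly as printed.

package main

import (
    "fmt"
    "time"

    "k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
    // Rebuilt from the logged config dump; 300000000000ns = 5m, 60000000000ns = 1m.
    cfg := storagebackend.Config{
        Prefix: "8dde30a6-dc9e-4d9e-9626-b7b203c61687", // per-test-run etcd prefix
        Transport: storagebackend.TransportConfig{
            ServerList: []string{"http://127.0.0.1:2379"},
        },
        Quorum:                false,
        Paging:                true,
        CompactionInterval:    5 * time.Minute,
        CountMetricPollPeriod: time.Minute,
    }
    fmt.Printf("%+v\n", cfg)
}
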
I0319 16:15:08.712444  106300 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.712739  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.712785  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.712841  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.712960  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.713672  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.713757  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.713963  106300 store.go:1319] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0319 16:15:08.714231  106300 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.714365  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.714379  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.714421  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.714809  106300 reflector.go:161] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0319 16:15:08.715300  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.732856  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.733507  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.735361  106300 store.go:1319] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0319 16:15:08.735688  106300 master.go:425] Enabling API group "scheduling.k8s.io".
I0319 16:15:08.735486  106300 reflector.go:161] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
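
The priorityclasses.scheduling.k8s.io storage registered just above is the piece TestPreemptionRaces actually exercises: pod preemption is driven by PriorityClass objects, stored here at scheduling.k8s.io/v1beta1. For orientation, a minimal client-side sketch of creating one against a server like this, using client-go of the same vintage (context-free Create signature); the kubeconfig path and class name are hypothetical.

package main

import (
    "log"

    schedulingv1beta1 "k8s.io/api/scheduling/v1beta1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Hypothetical kubeconfig; any client that can reach the test apiserver works.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/test.kubeconfig")
    if err != nil {
        log.Fatal(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    pc := &schedulingv1beta1.PriorityClass{
        ObjectMeta:    metav1.ObjectMeta{Name: "test-high-priority"}, // hypothetical name
        Value:         1000,
        GlobalDefault: false,
    }
    if _, err := client.SchedulingV1beta1().PriorityClasses().Create(pc); err != nil {
        log.Fatal(err)
    }
}
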
I0319 16:15:08.736954  106300 master.go:417] Skipping disabled API group "settings.k8s.io".
I0319 16:15:08.737883  106300 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.740118  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.743161  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.743337  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.743815  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.747312  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.747369  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.747511  106300 store.go:1319] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0319 16:15:08.747557  106300 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.747606  106300 reflector.go:161] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0319 16:15:08.747680  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.747692  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.747727  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.747847  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.748202  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.748347  106300 store.go:1319] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0319 16:15:08.748389  106300 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.748568  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.748588  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.748644  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.748566  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.748697  106300 reflector.go:161] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0319 16:15:08.748729  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.749166  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.749271  106300 store.go:1319] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0319 16:15:08.749305  106300 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.749390  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.749399  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.749433  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.749554  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.749586  106300 reflector.go:161] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0319 16:15:08.749977  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.750338  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.750438  106300 store.go:1319] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0319 16:15:08.750598  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.750666  106300 reflector.go:161] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0319 16:15:08.752782  106300 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.752876  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.752888  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.752921  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.753189  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.753706  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.753813  106300 store.go:1319] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0319 16:15:08.753847  106300 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.753914  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.753923  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.753960  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.754000  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.754026  106300 reflector.go:161] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0319 16:15:08.754327  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.754645  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.754752  106300 store.go:1319] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0319 16:15:08.754771  106300 master.go:425] Enabling API group "storage.k8s.io".
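
As the storage_factory lines above record, the storage.k8s.io group mixes storage versions: storageclasses are stored at v1 while volumeattachments, csinodes, and csidrivers are still at v1beta1. Spelled out as GroupVersionResources (a small sketch that simply restates what the log shows):

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
    // Storage versions as logged by storage_factory.go:285 for this group.
    gvrs := []schema.GroupVersionResource{
        {Group: "storage.k8s.io", Version: "v1", Resource: "storageclasses"},
        {Group: "storage.k8s.io", Version: "v1beta1", Resource: "volumeattachments"},
        {Group: "storage.k8s.io", Version: "v1beta1", Resource: "csinodes"},
        {Group: "storage.k8s.io", Version: "v1beta1", Resource: "csidrivers"},
    }
    for _, gvr := range gvrs {
        fmt.Println(gvr.String())
    }
}
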
I0319 16:15:08.754981  106300 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.755050  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.755060  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.755138  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.755181  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.756238  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.756494  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.756649  106300 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0319 16:15:08.756838  106300 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.756910  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.756919  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.756966  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.757008  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.757059  106300 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0319 16:15:08.757293  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.758986  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.759176  106300 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0319 16:15:08.759377  106300 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.759698  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.759713  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.759785  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.759838  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.759868  106300 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0319 16:15:08.760167  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.760875  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.760983  106300 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0319 16:15:08.761220  106300 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.761296  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.761306  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.761343  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.761392  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.761425  106300 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0319 16:15:08.761773  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.762057  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.762411  106300 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0319 16:15:08.762619  106300 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.762689  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.762699  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.762759  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.762837  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.762866  106300 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0319 16:15:08.763129  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.763473  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.763603  106300 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0319 16:15:08.763745  106300 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.763818  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.763830  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.763868  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.763967  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.763994  106300 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0319 16:15:08.764298  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.764663  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.764783  106300 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0319 16:15:08.764837  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.764921  106300 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0319 16:15:08.764959  106300 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.765033  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.765043  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.765122  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.765204  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.765719  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.765759  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.765852  106300 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0319 16:15:08.765891  106300 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0319 16:15:08.766044  106300 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.766166  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.766178  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.766222  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.766300  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.766648  106300 reflector.go:161] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0319 16:15:08.766821  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.766917  106300 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0319 16:15:08.767380  106300 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.767477  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.767490  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.767527  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.767569  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.767608  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.767715  106300 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0319 16:15:08.767930  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.767970  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.768050  106300 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0319 16:15:08.768133  106300 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0319 16:15:08.768271  106300 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.768341  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.768353  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.768384  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.774590  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.775181  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.775246  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.775391  106300 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0319 16:15:08.775565  106300 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0319 16:15:08.775621  106300 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.775736  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.775750  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.775787  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.775920  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.776328  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.776825  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.777221  106300 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0319 16:15:08.777559  106300 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.777588  106300 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0319 16:15:08.777689  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.777702  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.777737  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.778315  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.778694  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.778834  106300 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0319 16:15:08.779015  106300 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.779129  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.779140  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.779172  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.779248  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.779273  106300 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0319 16:15:08.779506  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.779732  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.779817  106300 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0319 16:15:08.779835  106300 master.go:425] Enabling API group "apps".
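
Each apps resource (deployments, statefulsets, daemonsets, replicasets, controllerrevisions) is registered several times above because the group serves more than one API version over the same internal storage, and every registration spins up its own cacher plus the reflector.go:161 list-watch. The cacher is internal to the apiserver, but the list-watch pattern it logs can be sketched from the client side with client-go; everything below (kubeconfig path, timings) is illustrative, not taken from the test.

package main

import (
    "time"

    appsv1 "k8s.io/api/apps/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/fields"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/test.kubeconfig") // hypothetical
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Client-side analogue of "Listing and watching *apps.Deployment".
    lw := cache.NewListWatchFromClient(
        client.AppsV1().RESTClient(), "deployments", metav1.NamespaceAll, fields.Everything())
    store := cache.NewStore(cache.MetaNamespaceKeyFunc)
    reflector := cache.NewReflector(lw, &appsv1.Deployment{}, store, 0)

    stop := make(chan struct{})
    go reflector.Run(stop)
    time.Sleep(2 * time.Second) // let one List+Watch cycle run, then stop
    close(stop)
}
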
I0319 16:15:08.779864  106300 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.779925  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.779934  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.779971  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.780001  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.780023  106300 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0319 16:15:08.780293  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.780603  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.780676  106300 store.go:1319] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0319 16:15:08.780697  106300 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.780747  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.780756  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.780804  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.780833  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.780855  106300 reflector.go:161] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0319 16:15:08.781109  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.781316  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.781378  106300 store.go:1319] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0319 16:15:08.781396  106300 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0319 16:15:08.781421  106300 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"8dde30a6-dc9e-4d9e-9626-b7b203c61687", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0319 16:15:08.781622  106300 client.go:352] parsed scheme: ""
I0319 16:15:08.781634  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:08.781660  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:08.781699  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.781919  106300 reflector.go:161] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0319 16:15:08.782230  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.782425  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:08.782445  106300 store.go:1319] Monitoring events count at <storage-prefix>//events
I0319 16:15:08.782639  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:08.785864  106300 master.go:425] Enabling API group "events.k8s.io".
W0319 16:15:08.835242  106300 genericapiserver.go:344] Skipping API batch/v2alpha1 because it has no resources.
W0319 16:15:08.941403  106300 genericapiserver.go:344] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0319 16:15:08.968403  106300 genericapiserver.go:344] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0319 16:15:08.977307  106300 genericapiserver.go:344] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0319 16:15:08.990974  106300 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
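
The five genericapiserver warnings are expected: alpha API versions ship disabled by default, so they install no resources and are skipped. From the outside, the same fact shows up as a group/version that is absent from discovery; a small sketch (kubeconfig path hypothetical, as before):

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/test.kubeconfig") // hypothetical
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // A skipped version is simply not served, so discovery returns an error for it.
    if _, err := client.Discovery().ServerResourcesForGroupVersion("batch/v2alpha1"); err != nil {
        fmt.Println("batch/v2alpha1 not served:", err)
    }
}
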
I0319 16:15:09.036337  106300 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0319 16:15:09.036369  106300 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0319 16:15:09.036378  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.036394  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.036401  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.036557  106300 wrap.go:47] GET /healthz: (425.721µs) 500
goroutine 29699 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01093c700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01093c700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00efad9a0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00f1a5fd8, 0xc00babeea0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77f00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77f00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77f00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77f00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77e00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00f1a5fd8, 0xc00eb77e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0107e7c20, 0xc00ea78ea0, 0x75f4ac0, 0xc00f1a5fd8, 0xc00eb77e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35148]
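
This 500 from GET /healthz is the normal startup shape, not a crash: healthz aggregates named checks, and until the etcd client connects and the post-start hooks (bootstrap-controller, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) complete, each failing check is reported as "[-]... failed: reason withheld" and the endpoint as a whole fails. The goroutine dump above comes from the apiserver's request logger recording the 500, and the same exchange repeats below until the checks pass. A caller waiting for readiness would poll until the status flips to 200, roughly like this (standard library only; the address is a stand-in, since the test server picks its own):

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

func main() {
    const url = "http://127.0.0.1:8080/healthz" // stand-in address
    for i := 0; i < 30; i++ {
        resp, err := http.Get(url)
        if err == nil {
            body, _ := ioutil.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("healthy:", string(body))
                return
            }
            // The body carries the per-check report seen in the log above.
            fmt.Printf("not ready (%d):\n%s\n", resp.StatusCode, body)
        }
        time.Sleep(100 * time.Millisecond)
    }
    fmt.Println("gave up waiting for /healthz")
}
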
I0319 16:15:09.037921  106300 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.791894ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0319 16:15:09.041169  106300 wrap.go:47] GET /api/v1/services: (1.530201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0319 16:15:09.048244  106300 wrap.go:47] GET /api/v1/services: (2.460103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0319 16:15:09.056370  106300 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0319 16:15:09.056402  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.056413  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.056421  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.056574  106300 wrap.go:47] GET /healthz: (368.989µs) 500
goroutine 29676 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011dd2cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011dd2cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011deed00, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011d7a5a8, 0xc001c42a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011d7a5a8, 0xc011e94100)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011d7a5a8, 0xc011e94100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011d7a5a8, 0xc011e94100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011d7a5a8, 0xc011e94100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011d7a5a8, 0xc011e94100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011d7a5a8, 0xc011e94100)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011d7a5a8, 0xc011e94100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011d7a5a8, 0xc011e94100)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011d7a5a8, 0xc011e94100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011d7a5a8, 0xc011e94100)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011d7a5a8, 0xc011e94100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011d7a5a8, 0xc011e94000)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011d7a5a8, 0xc011e94000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011dce9c0, 0xc00ea78ea0, 0x75f4ac0, 0xc011d7a5a8, 0xc011e94000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0319 16:15:09.057646  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.347368ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35148]
I0319 16:15:09.060034  106300 wrap.go:47] GET /api/v1/services: (1.662722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0319 16:15:09.060273  106300 wrap.go:47] GET /api/v1/services: (1.458036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:09.062781  106300 wrap.go:47] POST /api/v1/namespaces: (4.527386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35148]
I0319 16:15:09.065046  106300 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.380983ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:09.067414  106300 wrap.go:47] POST /api/v1/namespaces: (1.858905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:09.069039  106300 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (1.201012ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:09.076153  106300 wrap.go:47] POST /api/v1/namespaces: (6.573299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
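
The wrap.go lines above are the bootstrap-controller creating the system namespaces: each of kube-system, kube-public, and kube-node-lease is probed with a GET (the 404s) and then created with a POST (the 201s). The same get-then-create idiom, sketched with client-go of this era (context-free signatures; kubeconfig path hypothetical):

package main

import (
    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func ensureNamespace(client kubernetes.Interface, name string) error {
    // GET first (the 404s above), then POST (the 201s above).
    if _, err := client.CoreV1().Namespaces().Get(name, metav1.GetOptions{}); err == nil {
        return nil
    } else if !apierrors.IsNotFound(err) {
        return err
    }
    ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
    _, err := client.CoreV1().Namespaces().Create(ns)
    if apierrors.IsAlreadyExists(err) {
        return nil
    }
    return err
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/test.kubeconfig") // hypothetical
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    for _, name := range []string{"kube-system", "kube-public", "kube-node-lease"} {
        if err := ensureNamespace(client, name); err != nil {
            panic(err)
        }
    }
}
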
I0319 16:15:09.137497  106300 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0319 16:15:09.137552  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.137564  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.137573  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.137732  106300 wrap.go:47] GET /healthz: (405.816µs) 500
goroutine 29707 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01093d110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01093d110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011e6d820, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011e681a8, 0xc011e2a300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011e681a8, 0xc011e3df00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011e681a8, 0xc011e3df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011e681a8, 0xc011e3df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011e681a8, 0xc011e3df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011e681a8, 0xc011e3df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011e681a8, 0xc011e3df00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011e681a8, 0xc011e3df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011e681a8, 0xc011e3df00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011e681a8, 0xc011e3df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011e681a8, 0xc011e3df00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011e681a8, 0xc011e3df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011e681a8, 0xc011e3de00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011e681a8, 0xc011e3de00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e7cae0, 0xc00ea78ea0, 0x75f4ac0, 0xc011e681a8, 0xc011e3de00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35152]
I0319 16:15:09.157471  106300 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0319 16:15:09.157520  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.157545  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.157553  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.157697  106300 wrap.go:47] GET /healthz: (394.843µs) 500
goroutine 29719 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01099cf50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01099cf50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011edcd60, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011ee4038, 0xc0083b9380, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011ee4038, 0xc01096bf00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011ee4038, 0xc01096bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011ee4038, 0xc01096bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4038, 0xc01096bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4038, 0xc01096bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011ee4038, 0xc01096bf00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011ee4038, 0xc01096bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011ee4038, 0xc01096bf00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011ee4038, 0xc01096bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011ee4038, 0xc01096bf00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011ee4038, 0xc01096bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011ee4038, 0xc01096be00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011ee4038, 0xc01096be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e26a20, 0xc00ea78ea0, 0x75f4ac0, 0xc011ee4038, 0xc01096be00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:09.237595  106300 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0319 16:15:09.237642  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.237655  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.237663  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.237824  106300 wrap.go:47] GET /healthz: (422.083µs) 500
goroutine 29721 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01099d030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01099d030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011edcf80, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011ee4040, 0xc0083b9980, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011ee4040, 0xc011f28300)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011ee4040, 0xc011f28300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011ee4040, 0xc011f28300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4040, 0xc011f28300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4040, 0xc011f28300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011ee4040, 0xc011f28300)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011ee4040, 0xc011f28300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011ee4040, 0xc011f28300)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011ee4040, 0xc011f28300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011ee4040, 0xc011f28300)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011ee4040, 0xc011f28300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011ee4040, 0xc011f28200)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011ee4040, 0xc011f28200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e26b40, 0xc00ea78ea0, 0x75f4ac0, 0xc011ee4040, 0xc011f28200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35152]
I0319 16:15:09.257440  106300 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0319 16:15:09.257496  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.257507  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.257516  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.257689  106300 wrap.go:47] GET /healthz: (406.63µs) 500
goroutine 29709 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01093d180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01093d180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011e6db00, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011e681f0, 0xc011e2a900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011e681f0, 0xc011f12800)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011e681f0, 0xc011f12800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011e681f0, 0xc011f12800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011e681f0, 0xc011f12800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011e681f0, 0xc011f12800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011e681f0, 0xc011f12800)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011e681f0, 0xc011f12800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011e681f0, 0xc011f12800)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011e681f0, 0xc011f12800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011e681f0, 0xc011f12800)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011e681f0, 0xc011f12800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011e681f0, 0xc011f12700)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011e681f0, 0xc011f12700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e7cd80, 0xc00ea78ea0, 0x75f4ac0, 0xc011e681f0, 0xc011f12700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:09.337563  106300 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0319 16:15:09.337603  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.337615  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.337623  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.337800  106300 wrap.go:47] GET /healthz: (423.715µs) 500
goroutine 29723 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01099d110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01099d110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011edd020, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011ee4048, 0xc0083b9e00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011ee4048, 0xc011f28700)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011ee4048, 0xc011f28700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011ee4048, 0xc011f28700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4048, 0xc011f28700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4048, 0xc011f28700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011ee4048, 0xc011f28700)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011ee4048, 0xc011f28700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011ee4048, 0xc011f28700)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011ee4048, 0xc011f28700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011ee4048, 0xc011f28700)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011ee4048, 0xc011f28700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011ee4048, 0xc011f28600)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011ee4048, 0xc011f28600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e26c00, 0xc00ea78ea0, 0x75f4ac0, 0xc011ee4048, 0xc011f28600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35152]
I0319 16:15:09.357560  106300 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0319 16:15:09.357620  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.357632  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.357640  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.357800  106300 wrap.go:47] GET /healthz: (407.411µs) 500
goroutine 29711 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01093d260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01093d260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011e6dba0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011e681f8, 0xc011e2ad80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011e681f8, 0xc011f12c00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011e681f8, 0xc011f12c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011e681f8, 0xc011f12c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011e681f8, 0xc011f12c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011e681f8, 0xc011f12c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011e681f8, 0xc011f12c00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011e681f8, 0xc011f12c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011e681f8, 0xc011f12c00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011e681f8, 0xc011f12c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011e681f8, 0xc011f12c00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011e681f8, 0xc011f12c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011e681f8, 0xc011f12b00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011e681f8, 0xc011f12b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e7ce40, 0xc00ea78ea0, 0x75f4ac0, 0xc011e681f8, 0xc011f12b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:09.437799  106300 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0319 16:15:09.437842  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.437855  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.437863  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.438031  106300 wrap.go:47] GET /healthz: (682.376µs) 500
goroutine 29713 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01093d340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01093d340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011e6dc40, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011e68200, 0xc011e2b200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011e68200, 0xc011f13000)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011e68200, 0xc011f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011e68200, 0xc011f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011e68200, 0xc011f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011e68200, 0xc011f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011e68200, 0xc011f13000)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011e68200, 0xc011f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011e68200, 0xc011f13000)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011e68200, 0xc011f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011e68200, 0xc011f13000)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011e68200, 0xc011f13000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011e68200, 0xc011f12f00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011e68200, 0xc011f12f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e7cf00, 0xc00ea78ea0, 0x75f4ac0, 0xc011e68200, 0xc011f12f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35152]
I0319 16:15:09.458116  106300 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0319 16:15:09.458158  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.458185  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.458194  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.458343  106300 wrap.go:47] GET /healthz: (422.471µs) 500
goroutine 29725 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01099d1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01099d1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011edd420, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011ee4070, 0xc011f60600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011ee4070, 0xc011f28e00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011ee4070, 0xc011f28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011ee4070, 0xc011f28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4070, 0xc011f28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4070, 0xc011f28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011ee4070, 0xc011f28e00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011ee4070, 0xc011f28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011ee4070, 0xc011f28e00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011ee4070, 0xc011f28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011ee4070, 0xc011f28e00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011ee4070, 0xc011f28e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011ee4070, 0xc011f28d00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011ee4070, 0xc011f28d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e26e40, 0xc00ea78ea0, 0x75f4ac0, 0xc011ee4070, 0xc011f28d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:09.537474  106300 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0319 16:15:09.537508  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.537519  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.537526  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.537678  106300 wrap.go:47] GET /healthz: (387.401µs) 500
goroutine 29621 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010b9f110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010b9f110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fc40960, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc000377a68, 0xc00110c480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc000377a68, 0xc010bad900)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc000377a68, 0xc010bad900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc000377a68, 0xc010bad900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc000377a68, 0xc010bad900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc000377a68, 0xc010bad900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc000377a68, 0xc010bad900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc000377a68, 0xc010bad900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc000377a68, 0xc010bad900)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc000377a68, 0xc010bad900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc000377a68, 0xc010bad900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc000377a68, 0xc010bad900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc000377a68, 0xc010bad800)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc000377a68, 0xc010bad800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0111711a0, 0xc00ea78ea0, 0x75f4ac0, 0xc000377a68, 0xc010bad800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35152]
I0319 16:15:09.559967  106300 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0319 16:15:09.560012  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.560023  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.560030  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.560536  106300 wrap.go:47] GET /healthz: (709.201µs) 500
goroutine 29727 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01099d2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01099d2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011edd640, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011ee4078, 0xc011f60c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011ee4078, 0xc011f29200)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011ee4078, 0xc011f29200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011ee4078, 0xc011f29200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4078, 0xc011f29200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4078, 0xc011f29200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011ee4078, 0xc011f29200)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011ee4078, 0xc011f29200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011ee4078, 0xc011f29200)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011ee4078, 0xc011f29200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011ee4078, 0xc011f29200)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011ee4078, 0xc011f29200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011ee4078, 0xc011f29100)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011ee4078, 0xc011f29100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e26f60, 0xc00ea78ea0, 0x75f4ac0, 0xc011ee4078, 0xc011f29100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
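Every one of these 500s comes from the same place: the root /healthz endpoint aggregates named checks (ping, log, etcd, the poststarthook/* hooks) and keeps returning the [+]/[-] report — with failure reasons withheld on the root path — until every check passes. A minimal sketch of such an aggregator, assuming nothing about the real handleRootHealthz beyond the output format visible in the log:

package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

type namedCheck struct {
	name  string
	check func(*http.Request) error
}

// handleRootHealthz runs every registered check per request and returns
// 500 with a [+]/[-] report until all of them pass, like the log above.
func handleRootHealthz(checks ...namedCheck) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var buf bytes.Buffer
		failed := false
		for _, c := range checks {
			if err := c.check(r); err != nil {
				// The root endpoint withholds the concrete reason.
				fmt.Fprintf(&buf, "[-]%s failed: reason withheld\n", c.name)
				failed = true
			} else {
				fmt.Fprintf(&buf, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			buf.WriteString("healthz check failed\n")
			http.Error(w, buf.String(), http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, "ok")
	}
}

func main() {
	etcdReady := false // flips to true once the backend is reachable
	http.Handle("/healthz", handleRootHealthz(
		namedCheck{"ping", func(*http.Request) error { return nil }},
		namedCheck{"etcd", func(*http.Request) error {
			if !etcdReady {
				return fmt.Errorf("etcd client connection not yet established")
			}
			return nil
		}},
	))
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
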
I0319 16:15:09.565698  106300 client.go:352] parsed scheme: ""
I0319 16:15:09.565728  106300 client.go:352] scheme "" not registered, fallback to default scheme
I0319 16:15:09.565787  106300 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0319 16:15:09.565838  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0319 16:15:09.566393  106300 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0319 16:15:09.566486  106300 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
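These client.go and balancer lines are the etcd clientv3 gRPC resolver and balancer dialing 127.0.0.1:2379; only once an endpoint is pinned does the etcd healthz check start passing (the report flips to "[+]etcd ok" just below). A hedged sketch of the same connection from the client side, using the standalone clientv3 package — this tree vendors it under a different path, and the Status round trip is just one way to confirm the pin:

package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// A round trip confirms the balancer has pinned a healthy endpoint;
	// until then, the apiserver's etcd healthz check fails as logged.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if _, err := cli.Status(ctx, "127.0.0.1:2379"); err != nil {
		log.Fatalf("etcd not reachable yet: %v", err)
	}
	log.Println("etcd endpoint pinned and healthy")
}
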
I0319 16:15:09.638756  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.638791  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.638800  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.638971  106300 wrap.go:47] GET /healthz: (1.722638ms) 500
goroutine 29697 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010d316c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010d316c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011eea300, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00e7cf5b8, 0xc0085ccf20, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f900)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f900)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f800)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00e7cf5b8, 0xc01102f800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e0ee40, 0xc00ea78ea0, 0x75f4ac0, 0xc00e7cf5b8, 0xc01102f800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35152]
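Note the cadence of the GET /healthz requests (roughly every 20–80ms, alternating user agents on the same sockets): this is a readiness poll, with the integration harness hitting /healthz until the 500s turn into a 200. A sketch of that loop under the assumption it uses wait.Poll from k8s.io/apimachinery — the real harness may well be structured differently:

package main

import (
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForHealthy polls baseURL/healthz until it returns 200 OK or the
// timeout expires, swallowing transient dial errors along the way.
func waitForHealthy(baseURL string) error {
	return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		resp, err := http.Get(baseURL + "/healthz")
		if err != nil {
			return false, nil // server not up yet; keep polling
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	})
}

func main() {
	if err := waitForHealthy("http://127.0.0.1:8080"); err != nil {
		fmt.Println("apiserver never became healthy:", err)
	}
}
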
I0319 16:15:09.658557  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.658612  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.658622  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.658793  106300 wrap.go:47] GET /healthz: (1.438699ms) 500
goroutine 29658 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010391f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010391f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011e750a0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc01016df40, 0xc006d5d760, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc01016df40, 0xc011ed0c00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc01016df40, 0xc011ed0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc01016df40, 0xc011ed0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc01016df40, 0xc011ed0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc01016df40, 0xc011ed0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc01016df40, 0xc011ed0c00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc01016df40, 0xc011ed0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc01016df40, 0xc011ed0c00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc01016df40, 0xc011ed0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc01016df40, 0xc011ed0c00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc01016df40, 0xc011ed0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc01016df40, 0xc011ed0b00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc01016df40, 0xc011ed0b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011529380, 0xc00ea78ea0, 0x75f4ac0, 0xc01016df40, 0xc011ed0b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:09.739120  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.739176  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.739185  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.739344  106300 wrap.go:47] GET /healthz: (2.062972ms) 500
goroutine 29763 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010d31880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010d31880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011eea820, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00e7cf5e8, 0xc00945b600, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00e7cf5e8, 0xc01203c000)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00e7cf5e8, 0xc01203c000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00e7cf5e8, 0xc01203c000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00e7cf5e8, 0xc01203c000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00e7cf5e8, 0xc01203c000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00e7cf5e8, 0xc01203c000)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00e7cf5e8, 0xc01203c000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00e7cf5e8, 0xc01203c000)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00e7cf5e8, 0xc01203c000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00e7cf5e8, 0xc01203c000)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00e7cf5e8, 0xc01203c000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00e7cf5e8, 0xc01102ff00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00e7cf5e8, 0xc01102ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e0f260, 0xc00ea78ea0, 0x75f4ac0, 0xc00e7cf5e8, 0xc01102ff00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35152]
I0319 16:15:09.758907  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.758945  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.758953  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.759195  106300 wrap.go:47] GET /healthz: (1.855976ms) 500
goroutine 29779 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01093d420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01093d420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0120340c0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011e68208, 0xc000de2420, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011e68208, 0xc011f13400)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011e68208, 0xc011f13400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011e68208, 0xc011f13400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011e68208, 0xc011f13400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011e68208, 0xc011f13400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011e68208, 0xc011f13400)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011e68208, 0xc011f13400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011e68208, 0xc011f13400)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011e68208, 0xc011f13400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011e68208, 0xc011f13400)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011e68208, 0xc011f13400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011e68208, 0xc011f13300)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011e68208, 0xc011f13300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e7cfc0, 0xc00ea78ea0, 0x75f4ac0, 0xc011e68208, 0xc011f13300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:09.839010  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.839054  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.839099  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.839283  106300 wrap.go:47] GET /healthz: (1.93128ms) 500
goroutine 29781 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01093d570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01093d570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012034680, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011e68240, 0xc0085cd1e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011e68240, 0xc011f13b00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011e68240, 0xc011f13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011e68240, 0xc011f13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011e68240, 0xc011f13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011e68240, 0xc011f13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011e68240, 0xc011f13b00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011e68240, 0xc011f13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011e68240, 0xc011f13b00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011e68240, 0xc011f13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011e68240, 0xc011f13b00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011e68240, 0xc011f13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011e68240, 0xc011f13a00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011e68240, 0xc011f13a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e7d500, 0xc00ea78ea0, 0x75f4ac0, 0xc011e68240, 0xc011f13a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35152]
I0319 16:15:09.859506  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.859541  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.859550  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.859726  106300 wrap.go:47] GET /healthz: (2.372695ms) 500
goroutine 29783 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01093d650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01093d650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012034860, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011e68250, 0xc0085cd4a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011e68250, 0xc011f13f00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011e68250, 0xc011f13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011e68250, 0xc011f13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011e68250, 0xc011f13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011e68250, 0xc011f13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011e68250, 0xc011f13f00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011e68250, 0xc011f13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011e68250, 0xc011f13f00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011e68250, 0xc011f13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011e68250, 0xc011f13f00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011e68250, 0xc011f13f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011e68250, 0xc011f13e00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011e68250, 0xc011f13e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e7d7a0, 0xc00ea78ea0, 0x75f4ac0, 0xc011e68250, 0xc011f13e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:09.938786  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.938824  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.938834  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.939017  106300 wrap.go:47] GET /healthz: (1.595415ms) 500
goroutine 29752 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01099d570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01099d570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0120842a0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011ee4108, 0xc00945b8c0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011ee4108, 0xc011f29e00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011ee4108, 0xc011f29e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011ee4108, 0xc011f29e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4108, 0xc011f29e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4108, 0xc011f29e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011ee4108, 0xc011f29e00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011ee4108, 0xc011f29e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011ee4108, 0xc011f29e00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011ee4108, 0xc011f29e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011ee4108, 0xc011f29e00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011ee4108, 0xc011f29e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011ee4108, 0xc011f29d00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011ee4108, 0xc011f29d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e27aa0, 0xc00ea78ea0, 0x75f4ac0, 0xc011ee4108, 0xc011f29d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35152]
I0319 16:15:09.958992  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:09.959034  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:09.959044  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:09.959244  106300 wrap.go:47] GET /healthz: (1.8894ms) 500
goroutine 29754 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01099d650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01099d650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0120844c0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011ee4118, 0xc00eb449a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011ee4118, 0xc0120ae200)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011ee4118, 0xc0120ae200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011ee4118, 0xc0120ae200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4118, 0xc0120ae200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011ee4118, 0xc0120ae200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011ee4118, 0xc0120ae200)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011ee4118, 0xc0120ae200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011ee4118, 0xc0120ae200)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011ee4118, 0xc0120ae200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011ee4118, 0xc0120ae200)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011ee4118, 0xc0120ae200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011ee4118, 0xc0120ae100)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011ee4118, 0xc0120ae100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e27d40, 0xc00ea78ea0, 0x75f4ac0, 0xc011ee4118, 0xc0120ae100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:10.037725  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.796453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0319 16:15:10.038001  106300 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.074491ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35152]
I0319 16:15:10.038026  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.688737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35160]
I0319 16:15:10.038523  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.038543  106300 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0319 16:15:10.038552  106300 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0319 16:15:10.038703  106300 wrap.go:47] GET /healthz: (1.33632ms) 500
goroutine 29769 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010d31b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010d31b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011eeb3a0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00e7cf6c8, 0xc000de2840, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d200)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d200)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d200)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d200)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d100)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00e7cf6c8, 0xc01203d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e0fc80, 0xc00ea78ea0, 0x75f4ac0, 0xc00e7cf6c8, 0xc01203d100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35162]
I0319 16:15:10.040142  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.374597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0319 16:15:10.040189  106300 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.370932ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35160]
I0319 16:15:10.041035  106300 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.238459ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.041299  106300 storage_scheduling.go:113] created PriorityClass system-node-critical with value 2000001000
I0319 16:15:10.043248  106300 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.732395ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.043542  106300 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (2.701701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.045365  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (4.792787ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0319 16:15:10.047196  106300 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.54527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.047410  106300 storage_scheduling.go:113] created PriorityClass system-cluster-critical with value 2000000000
I0319 16:15:10.047475  106300 storage_scheduling.go:122] all system priority classes are created successfully or already exist.
I0319 16:15:10.048694  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.79946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35150]
I0319 16:15:10.050575  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.257472ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.052403  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.255599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.056252  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (3.334876ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.057850  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.092291ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.058108  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.058297  106300 wrap.go:47] GET /healthz: (1.061323ms) 500
goroutine 29612 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011294cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011294cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012132680, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00b733f78, 0xc00dfc5040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00b733f78, 0xc0112a1100)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00b733f78, 0xc0112a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00b733f78, 0xc0112a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00b733f78, 0xc0112a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00b733f78, 0xc0112a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00b733f78, 0xc0112a1100)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00b733f78, 0xc0112a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00b733f78, 0xc0112a1100)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00b733f78, 0xc0112a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00b733f78, 0xc0112a1100)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00b733f78, 0xc0112a1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00b733f78, 0xc0112a1000)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00b733f78, 0xc0112a1000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0112caa20, 0xc00ea78ea0, 0x75f4ac0, 0xc00b733f78, 0xc0112a1000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.059349  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (960.371µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.061258  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.433701ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.065006  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.339392ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.065264  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0319 16:15:10.066796  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.316945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.069414  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.988534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.070000  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0319 16:15:10.071509  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.242044ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.074210  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.158993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.074424  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0319 16:15:10.079636  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (4.954736ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.083217  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.893315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.083499  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0319 16:15:10.084685  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (969.982µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.087045  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.904055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.087429  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0319 16:15:10.088592  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (940.288µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.091121  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.082989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.091392  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0319 16:15:10.092637  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.069847ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.095171  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.183674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.095398  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0319 16:15:10.096908  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.220227ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.100166  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.710373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.100484  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0319 16:15:10.102016  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.299625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.107641  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.800164ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.108160  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0319 16:15:10.109918  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.410552ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.113886  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.109333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.114285  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0319 16:15:10.116291  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.791594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.119052  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.146217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.119337  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0319 16:15:10.120745  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.155262ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.126124  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.520135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.126495  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0319 16:15:10.128366  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.596212ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.130794  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.992411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.131142  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0319 16:15:10.132599  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.180452ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.135349  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.293369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.138283  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0319 16:15:10.139220  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.139433  106300 wrap.go:47] GET /healthz: (1.694399ms) 500
goroutine 29876 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01232c770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01232c770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012363140, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc011d7a9c0, 0xc0087feb40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc011d7a9c0, 0xc01230b600)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc011d7a9c0, 0xc01230b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc011d7a9c0, 0xc01230b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc011d7a9c0, 0xc01230b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc011d7a9c0, 0xc01230b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc011d7a9c0, 0xc01230b600)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc011d7a9c0, 0xc01230b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc011d7a9c0, 0xc01230b600)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc011d7a9c0, 0xc01230b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc011d7a9c0, 0xc01230b600)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc011d7a9c0, 0xc01230b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc011d7a9c0, 0xc01230b500)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc011d7a9c0, 0xc01230b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0123425a0, 0xc00ea78ea0, 0x75f4ac0, 0xc011d7a9c0, 0xc01230b500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35162]
I0319 16:15:10.139835  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.119968ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.144875  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.33507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.145279  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0319 16:15:10.147247  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.623076ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.150427  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.574737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.150804  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0319 16:15:10.152623  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.586312ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.155324  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.179772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.155686  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0319 16:15:10.157704  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.774739ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.158150  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.158518  106300 wrap.go:47] GET /healthz: (1.369535ms) 500
goroutine 29863 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0122b6770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0122b6770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0122b52a0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0121e0398, 0xc01245a000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0121e0398, 0xc012448500)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0121e0398, 0xc012448500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0121e0398, 0xc012448500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0121e0398, 0xc012448500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0121e0398, 0xc012448500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0121e0398, 0xc012448500)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0121e0398, 0xc012448500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0121e0398, 0xc012448500)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0121e0398, 0xc012448500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0121e0398, 0xc012448500)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0121e0398, 0xc012448500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0121e0398, 0xc012448400)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0121e0398, 0xc012448400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0112cbf20, 0xc00ea78ea0, 0x75f4ac0, 0xc0121e0398, 0xc012448400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.161133  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.436299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.161921  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0319 16:15:10.164044  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.665999ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.170172  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.962881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.170664  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0319 16:15:10.172135  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.238993ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.175262  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.585404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.175571  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0319 16:15:10.176945  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.142549ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.179298  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.860292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.179515  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0319 16:15:10.183888  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (4.146115ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.191202  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.564146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.191526  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0319 16:15:10.195303  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (3.386613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.200807  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.776484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.201234  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0319 16:15:10.203339  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (1.550599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.208750  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.078157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.209316  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0319 16:15:10.210681  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.091936ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.213256  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.914583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.218410  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0319 16:15:10.230763  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (9.475752ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.250202  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (15.292081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.250630  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0319 16:15:10.262251  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.262839  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.266004  106300 wrap.go:47] GET /healthz: (7.543533ms) 500
goroutine 29919 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012403dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012403dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0124eb840, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0124ee148, 0xc009aca8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0124ee148, 0xc0124fe800)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0124ee148, 0xc0124fe800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0124ee148, 0xc0124fe800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0124ee148, 0xc0124fe800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0124ee148, 0xc0124fe800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0124ee148, 0xc0124fe800)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0124ee148, 0xc0124fe800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0124ee148, 0xc0124fe800)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0124ee148, 0xc0124fe800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0124ee148, 0xc0124fe800)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0124ee148, 0xc0124fe800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0124ee148, 0xc0124fe700)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0124ee148, 0xc0124fe700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01248d8c0, 0xc00ea78ea0, 0x75f4ac0, 0xc0124ee148, 0xc0124fe700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.266849  106300 wrap.go:47] GET /healthz: (9.938112ms) 500
goroutine 29931 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012572310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012572310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012570b40, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc01217aba0, 0xc0095bab40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc01217aba0, 0xc012610900)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc01217aba0, 0xc012610900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc01217aba0, 0xc012610900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc01217aba0, 0xc012610900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc01217aba0, 0xc012610900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc01217aba0, 0xc012610900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc01217aba0, 0xc012610900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc01217aba0, 0xc012610900)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc01217aba0, 0xc012610900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc01217aba0, 0xc012610900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc01217aba0, 0xc012610900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc01217aba0, 0xc012610800)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc01217aba0, 0xc012610800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012512de0, 0xc00ea78ea0, 0x75f4ac0, 0xc01217aba0, 0xc012610800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35162]
I0319 16:15:10.291425  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (40.112172ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35164]
I0319 16:15:10.295621  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.176241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.296868  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0319 16:15:10.307812  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (10.074196ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.330777  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.859444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.337591  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0319 16:15:10.338948  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.340637  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (2.381734ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.343482  106300 wrap.go:47] GET /healthz: (3.756141ms) 500
goroutine 29632 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0125642a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0125642a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0125683c0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc000377e00, 0xc0098d8280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc000377e00, 0xc012566700)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc000377e00, 0xc012566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc000377e00, 0xc012566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc000377e00, 0xc012566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc000377e00, 0xc012566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc000377e00, 0xc012566700)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc000377e00, 0xc012566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc000377e00, 0xc012566700)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc000377e00, 0xc012566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc000377e00, 0xc012566700)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc000377e00, 0xc012566700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc000377e00, 0xc012566600)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc000377e00, 0xc012566600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0124e6780, 0xc00ea78ea0, 0x75f4ac0, 0xc000377e00, 0xc012566600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
I0319 16:15:10.344011  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.71632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.344512  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0319 16:15:10.348158  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (3.267308ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.351614  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.449885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.351893  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0319 16:15:10.353384  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.208484ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.355773  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.841585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.356166  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0319 16:15:10.357573  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.166882ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.360302  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.299036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.360729  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0319 16:15:10.361122  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.361336  106300 wrap.go:47] GET /healthz: (2.464691ms) 500
goroutine 29934 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0101fc150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0101fc150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01194c160, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0113a8068, 0xc0021308c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0113a8068, 0xc00596e400)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0113a8068, 0xc00596e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0113a8068, 0xc00596e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0113a8068, 0xc00596e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0113a8068, 0xc00596e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0113a8068, 0xc00596e400)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0113a8068, 0xc00596e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0113a8068, 0xc00596e400)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0113a8068, 0xc00596e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0113a8068, 0xc00596e400)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0113a8068, 0xc00596e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0113a8068, 0xc00596e300)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0113a8068, 0xc00596e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006666300, 0xc00ea78ea0, 0x75f4ac0, 0xc0113a8068, 0xc00596e300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.386901  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (24.848163ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.390935  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.314806ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.391257  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0319 16:15:10.392789  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.348503ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.396337  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.081355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.396670  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0319 16:15:10.398209  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.13972ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.400873  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.252732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.401190  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0319 16:15:10.404162  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (2.801366ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.407764  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.723966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.408180  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0319 16:15:10.409791  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.251783ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.413276  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.84604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.413613  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0319 16:15:10.414953  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.030138ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.418236  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.55533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.418748  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0319 16:15:10.420418  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.259587ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.428250  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (7.349116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.428747  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0319 16:15:10.430508  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.487696ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.432862  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.858452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.433411  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0319 16:15:10.434666  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.062985ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.437766  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.160928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.438313  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0319 16:15:10.439036  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.440153  106300 wrap.go:47] GET /healthz: (1.987549ms) 500
goroutine 29962 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f736af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f736af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f9fd2e0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00eb82320, 0xc00262ab40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00eb82320, 0xc002729900)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00eb82320, 0xc002729900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00eb82320, 0xc002729900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00eb82320, 0xc002729900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00eb82320, 0xc002729900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00eb82320, 0xc002729900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00eb82320, 0xc002729900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00eb82320, 0xc002729900)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00eb82320, 0xc002729900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00eb82320, 0xc002729900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00eb82320, 0xc002729900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00eb82320, 0xc002729800)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00eb82320, 0xc002729800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e0f8d80, 0xc00ea78ea0, 0x75f4ac0, 0xc00eb82320, 0xc002729800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
I0319 16:15:10.440289  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.373183ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.443699  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.858331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.444050  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0319 16:15:10.445552  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.059589ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.448894  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.374989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.449145  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0319 16:15:10.450481  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.080649ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.453754  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.494036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.454152  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0319 16:15:10.455909  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.456066ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.458514  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.968622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.458854  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0319 16:15:10.459345  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.459838  106300 wrap.go:47] GET /healthz: (2.406197ms) 500
goroutine 30050 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0101eb5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0101eb5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f6f22e0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc01016ca38, 0xc002130dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc01016ca38, 0xc00353d900)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc01016ca38, 0xc00353d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc01016ca38, 0xc00353d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc01016ca38, 0xc00353d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc01016ca38, 0xc00353d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc01016ca38, 0xc00353d900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc01016ca38, 0xc00353d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc01016ca38, 0xc00353d900)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc01016ca38, 0xc00353d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc01016ca38, 0xc00353d900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc01016ca38, 0xc00353d900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc01016ca38, 0xc00353d800)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc01016ca38, 0xc00353d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e7c31a0, 0xc00ea78ea0, 0x75f4ac0, 0xc01016ca38, 0xc00353d800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.461119  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.901831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.464980  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.280493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.465421  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0319 16:15:10.468568  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (2.569936ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.471162  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.974061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.471398  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0319 16:15:10.472762  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.130902ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.475659  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.283744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.475915  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0319 16:15:10.477306  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.120317ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.480049  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.084193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.483009  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0319 16:15:10.484851  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.535026ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.487762  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.16375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.488142  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0319 16:15:10.489616  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.243333ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.492334  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.082528ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.492695  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0319 16:15:10.494253  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.308377ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.497322  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.536176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.497627  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0319 16:15:10.499296  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.395176ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.502368  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.471322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.502810  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0319 16:15:10.506401  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (3.227445ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.509021  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.049926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.509717  106300 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0319 16:15:10.511407  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.377665ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.514823  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.977462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.515366  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0319 16:15:10.516744  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.122707ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.519523  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.127683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.519850  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0319 16:15:10.521336  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.20504ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.526103  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.214348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.526441  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0319 16:15:10.527924  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.213068ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.539427  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.19693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.539552  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.539828  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0319 16:15:10.540046  106300 wrap.go:47] GET /healthz: (2.683645ms) 500
goroutine 30048 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f2ea770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f2ea770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f513280, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0121e0858, 0xc002131400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0121e0858, 0xc004efbd00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0121e0858, 0xc004efbd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0121e0858, 0xc004efbd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0121e0858, 0xc004efbd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0121e0858, 0xc004efbd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0121e0858, 0xc004efbd00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0121e0858, 0xc004efbd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0121e0858, 0xc004efbd00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0121e0858, 0xc004efbd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0121e0858, 0xc004efbd00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0121e0858, 0xc004efbd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0121e0858, 0xc004efba00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0121e0858, 0xc004efba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0054ff860, 0xc00ea78ea0, 0x75f4ac0, 0xc0121e0858, 0xc004efba00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
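Each of these 500s has the same shape: GET /healthz runs a list of named checks, prints one [+] or [-] line per check, and fails the whole request as long as the rbac/bootstrap-roles poststarthook reports "not finished". Here is a minimal sketch of such an aggregated handler, assuming nothing about the real k8s.io/apiserver healthz internals; the bootstrapDone flag and check names are illustrative stand-ins.

package main

import (
	"errors"
	"fmt"
	"net/http"
	"net/http/httptest"
)

type check struct {
	name string
	fn   func() error
}

// healthzHandler emits one "[+]name ok" or "[-]name failed: reason withheld"
// line per check and returns 500 until every check passes, the same output
// shape as the "logging error output" lines above.
func healthzHandler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var out string
		failed := false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				// The real endpoint withholds the failure reason from the client.
				out += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				out += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			http.Error(w, out+"healthz check failed", http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, out+"ok")
	}
}

func main() {
	bootstrapDone := false // hypothetical flag flipped when the poststarthook finishes
	checks := []check{
		{"ping", func() error { return nil }},
		{"poststarthook/rbac/bootstrap-roles", func() error {
			if !bootstrapDone {
				return errors.New("not finished")
			}
			return nil
		}},
	}
	srv := httptest.NewServer(healthzHandler(checks))
	defer srv.Close()
	resp, err := http.Get(srv.URL)
	if err == nil {
		fmt.Println(resp.Status) // "500 Internal Server Error" until bootstrapDone is true
		resp.Body.Close()
	}
}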
I0319 16:15:10.558446  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (2.100843ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.559108  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.559343  106300 wrap.go:47] GET /healthz: (1.736741ms) 500
goroutine 30124 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f692fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f692fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f4f73c0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00eb82d60, 0xc003078780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4500)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4500)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4500)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4500)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4400)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00eb82d60, 0xc00a0f4400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0049f20c0, 0xc00ea78ea0, 0x75f4ac0, 0xc00eb82d60, 0xc00a0f4400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.579415  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.887669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.579749  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0319 16:15:10.599206  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (2.752306ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.619299  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.020092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.619638  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0319 16:15:10.638179  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.638277  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.9558ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.638401  106300 wrap.go:47] GET /healthz: (1.256192ms) 500
goroutine 30132 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f9d1ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f9d1ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f57aaa0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0115b9420, 0xc00262be00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0115b9420, 0xc002960b00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0115b9420, 0xc002960b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0115b9420, 0xc002960b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0115b9420, 0xc002960b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0115b9420, 0xc002960b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0115b9420, 0xc002960b00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0115b9420, 0xc002960b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0115b9420, 0xc002960b00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0115b9420, 0xc002960b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0115b9420, 0xc002960b00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0115b9420, 0xc002960b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0115b9420, 0xc002960a00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0115b9420, 0xc002960a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002993320, 0xc00ea78ea0, 0x75f4ac0, 0xc0115b9420, 0xc002960a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
I0319 16:15:10.661568  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.392129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.663289  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0319 16:15:10.663374  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.663606  106300 wrap.go:47] GET /healthz: (6.147173ms) 500
goroutine 30058 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f5d8e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f5d8e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f4bea00, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc01016d168, 0xc003078b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc01016d168, 0xc002ac8100)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc01016d168, 0xc002ac8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc01016d168, 0xc002ac8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc01016d168, 0xc002ac8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc01016d168, 0xc002ac8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc01016d168, 0xc002ac8100)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc01016d168, 0xc002ac8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc01016d168, 0xc002ac8100)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc01016d168, 0xc002ac8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc01016d168, 0xc002ac8100)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc01016d168, 0xc002ac8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc01016d168, 0xc002ac8000)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc01016d168, 0xc002ac8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004802540, 0xc00ea78ea0, 0x75f4ac0, 0xc01016d168, 0xc002ac8000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.677904  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.622413ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.699284  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.864072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.699738  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0319 16:15:10.717779  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.551083ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.738900  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.739291  106300 wrap.go:47] GET /healthz: (1.87863ms) 500
goroutine 30070 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0101fdb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0101fdb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f559ae0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0113a8668, 0xc001f4c8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0113a8668, 0xc0078e7c00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0113a8668, 0xc0078e7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0113a8668, 0xc0078e7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0113a8668, 0xc0078e7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0113a8668, 0xc0078e7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0113a8668, 0xc0078e7c00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0113a8668, 0xc0078e7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0113a8668, 0xc0078e7c00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0113a8668, 0xc0078e7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0113a8668, 0xc0078e7c00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0113a8668, 0xc0078e7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0113a8668, 0xc0078e7b00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0113a8668, 0xc0078e7b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00f65f800, 0xc00ea78ea0, 0x75f4ac0, 0xc0113a8668, 0xc0078e7b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
I0319 16:15:10.740609  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.975062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.741415  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0319 16:15:10.758295  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (2.046136ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.758496  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.758819  106300 wrap.go:47] GET /healthz: (1.56546ms) 500
goroutine 30062 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f5d92d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f5d92d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f4bf7e0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc01016d3b8, 0xc001f4cf00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc01016d3b8, 0xc002ac8b00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc01016d3b8, 0xc002ac8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc01016d3b8, 0xc002ac8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc01016d3b8, 0xc002ac8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc01016d3b8, 0xc002ac8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc01016d3b8, 0xc002ac8b00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc01016d3b8, 0xc002ac8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc01016d3b8, 0xc002ac8b00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc01016d3b8, 0xc002ac8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc01016d3b8, 0xc002ac8b00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc01016d3b8, 0xc002ac8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc01016d3b8, 0xc002ac8a00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc01016d3b8, 0xc002ac8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b0de9c0, 0xc00ea78ea0, 0x75f4ac0, 0xc01016d3b8, 0xc002ac8a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.779364  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.973834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.779768  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0319 16:15:10.798391  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (2.060614ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.819822  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.579716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.820216  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0319 16:15:10.838334  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.838550  106300 wrap.go:47] GET /healthz: (1.28636ms) 500
goroutine 30195 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f4502a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f4502a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f42cea0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc000c2a618, 0xc0087fe3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc000c2a618, 0xc000bf8300)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc000c2a618, 0xc000bf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc000c2a618, 0xc000bf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc000c2a618, 0xc000bf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc000c2a618, 0xc000bf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc000c2a618, 0xc000bf8300)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc000c2a618, 0xc000bf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc000c2a618, 0xc000bf8300)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc000c2a618, 0xc000bf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc000c2a618, 0xc000bf8300)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc000c2a618, 0xc000bf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc000c2a618, 0xc002a05f00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc000c2a618, 0xc002a05f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bb40ea0, 0xc00ea78ea0, 0x75f4ac0, 0xc000c2a618, 0xc002a05f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35162]
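The goroutine dumps accompanying each 500 all walk the same handler chain: timeoutHandler at the bottom (it spawned the goroutine), then WithAuthentication, WithImpersonation, WithMaxInFlightLimit, WithAuthorization, the mux, and handleRootHealthz at the top. A rough sketch of how such nested http.Handler filters compose follows; the filter names are made-up stand-ins, not the vendored apiserver code.

package main

import (
	"fmt"
	"net/http"
)

// withName is a stand-in for filters like WithAuthentication: it wraps the
// next handler, does its per-request work, and delegates inward.
func withName(name string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Printf("filter %s ran for %s\n", name, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	var h http.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "ok") // innermost handler, e.g. the healthz mux
	})
	// Wrap innermost-to-outermost; the outermost (first-called) filter shows
	// up deepest in a goroutine dump because it called into everything above.
	for _, name := range []string{"authorization", "maxinflight", "impersonation", "authentication"} {
		h = withName(name, h)
	}
	_ = h // in a server: http.ListenAndServe(":8080", h)
}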
I0319 16:15:10.838592  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.265797ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.858697  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.858890  106300 wrap.go:47] GET /healthz: (1.742578ms) 500
goroutine 30129 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f693490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f693490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f4824a0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00eb82e78, 0xc003079040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5200)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5200)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5200)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5200)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5100)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00eb82e78, 0xc00a0f5100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0049f3740, 0xc00ea78ea0, 0x75f4ac0, 0xc00eb82e78, 0xc00a0f5100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.860435  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.02439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.860826  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0319 16:15:10.879051  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (2.780331ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.900919  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.455575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.901654  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0319 16:15:10.918885  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (2.439109ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.939146  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.939798  106300 wrap.go:47] GET /healthz: (2.566625ms) 500
goroutine 30248 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f442850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f442850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f24e440, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0113a8aa0, 0xc003079400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae400)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae400)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae400)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae400)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae300)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0113a8aa0, 0xc0027ae300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b0cc240, 0xc00ea78ea0, 0x75f4ac0, 0xc0113a8aa0, 0xc0027ae300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35162]
I0319 16:15:10.939907  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.557748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.940364  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0319 16:15:10.958546  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (2.214062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:10.960785  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:10.961123  106300 wrap.go:47] GET /healthz: (1.830592ms) 500
goroutine 30220 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f5932d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f5932d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f2c9ae0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00eab0a30, 0xc000079680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00eab0a30, 0xc0039acb00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00eab0a30, 0xc0039acb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00eab0a30, 0xc0039acb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00eab0a30, 0xc0039acb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00eab0a30, 0xc0039acb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00eab0a30, 0xc0039acb00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00eab0a30, 0xc0039acb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00eab0a30, 0xc0039acb00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00eab0a30, 0xc0039acb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00eab0a30, 0xc0039acb00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00eab0a30, 0xc0039acb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00eab0a30, 0xc0039aca00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00eab0a30, 0xc0039aca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00adef140, 0xc00ea78ea0, 0x75f4ac0, 0xc00eab0a30, 0xc0039aca00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.980490  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.058196ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:10.981239  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0319 16:15:10.998143  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.805463ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:11.019405  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.133712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:11.019783  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0319 16:15:11.047759  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (11.090029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:11.060627  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.060843  106300 wrap.go:47] GET /healthz: (23.643789ms) 500
goroutine 30275 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f593ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f593ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f1b81a0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00eab0da0, 0xc001f4db80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00eab0da0, 0xc000c42000)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00eab0da0, 0xc000c42000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00eab0da0, 0xc000c42000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00eab0da0, 0xc000c42000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00eab0da0, 0xc000c42000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00eab0da0, 0xc000c42000)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00eab0da0, 0xc000c42000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00eab0da0, 0xc000c42000)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00eab0da0, 0xc000c42000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00eab0da0, 0xc000c42000)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00eab0da0, 0xc000c42000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00eab0da0, 0xc0039adf00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00eab0da0, 0xc0039adf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00adefd40, 0xc00ea78ea0, 0x75f4ac0, 0xc00eab0da0, 0xc0039adf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
I0319 16:15:11.061228  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.061383  106300 wrap.go:47] GET /healthz: (2.617316ms) 500
goroutine 30278 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f593dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f593dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f1b8380, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00eab0df8, 0xc009a36140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00eab0df8, 0xc000a4e800)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00eab0df8, 0xc000a4e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00eab0df8, 0xc000a4e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00eab0df8, 0xc000a4e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00eab0df8, 0xc000a4e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00eab0df8, 0xc000a4e800)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00eab0df8, 0xc000a4e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00eab0df8, 0xc000a4e800)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00eab0df8, 0xc000a4e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00eab0df8, 0xc000a4e800)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00eab0df8, 0xc000a4e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00eab0df8, 0xc000c43f00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00eab0df8, 0xc000c43f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a2c2300, 0xc00ea78ea0, 0x75f4ac0, 0xc00eab0df8, 0xc000c43f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.064628  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.46541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35162]
I0319 16:15:11.065108  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0319 16:15:11.077980  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.385567ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.099108  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.166765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.099408  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0319 16:15:11.138644  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (22.247986ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.139181  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.139365  106300 wrap.go:47] GET /healthz: (2.129675ms) 500
goroutine 30272 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f3970a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f3970a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f0a2f60, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc01194ba18, 0xc000079b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc01194ba18, 0xc002184f00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc01194ba18, 0xc002184f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc01194ba18, 0xc002184f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc01194ba18, 0xc002184f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc01194ba18, 0xc002184f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc01194ba18, 0xc002184f00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc01194ba18, 0xc002184f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc01194ba18, 0xc002184f00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc01194ba18, 0xc002184f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc01194ba18, 0xc002184f00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc01194ba18, 0xc002184f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc01194ba18, 0xc002184d00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc01194ba18, 0xc002184d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009f15560, 0xc00ea78ea0, 0x75f4ac0, 0xc01194ba18, 0xc002184d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
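The interleaved GET /healthz requests come from the test harness waiting for the apiserver to become ready; they keep returning 500 until the bootstrap poststarthook completes. A minimal sketch of that wait loop is below, with the address and retry interval assumed for illustration rather than taken from the actual test code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthy polls /healthz until it returns 200, the way the repeated
// "GET /healthz ... 500" attempts above eventually succeed once the
// rbac/bootstrap-roles poststarthook finishes.
func waitForHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // every check, including bootstrap-roles, now passes
			}
		}
		time.Sleep(20 * time.Millisecond) // retry interval is an assumption
	}
	return fmt.Errorf("not healthy after %v", timeout)
}

func main() {
	// Hypothetical address; the integration test uses a random local port.
	if err := waitForHealthy("http://127.0.0.1:8080/healthz", 5*time.Second); err != nil {
		fmt.Println(err)
	}
}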
I0319 16:15:11.143705  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.304339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.143967  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0319 16:15:11.158133  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.232436ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.158969  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.159187  106300 wrap.go:47] GET /healthz: (1.56106ms) 500
goroutine 30293 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f332a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f332a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f1290e0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc01016dca8, 0xc00261ba40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc01016dca8, 0xc0014c3e00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc01016dca8, 0xc0014c3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc01016dca8, 0xc0014c3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc01016dca8, 0xc0014c3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc01016dca8, 0xc0014c3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc01016dca8, 0xc0014c3e00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc01016dca8, 0xc0014c3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc01016dca8, 0xc0014c3e00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc01016dca8, 0xc0014c3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc01016dca8, 0xc0014c3e00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc01016dca8, 0xc0014c3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc01016dca8, 0xc0014c3d00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc01016dca8, 0xc0014c3d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009dd68a0, 0xc00ea78ea0, 0x75f4ac0, 0xc01016dca8, 0xc0014c3d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.179450  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.652018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.179773  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0319 16:15:11.197382  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.228457ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.219804  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.432286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.220127  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0319 16:15:11.238849  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (2.611116ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.239653  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.239859  106300 wrap.go:47] GET /healthz: (2.67454ms) 500
goroutine 30309 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f3977a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f3977a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f0994a0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc01194bef8, 0xc0030797c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc01194bef8, 0xc0029e8100)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc01194bef8, 0xc0029e8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc01194bef8, 0xc0029e8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc01194bef8, 0xc0029e8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc01194bef8, 0xc0029e8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc01194bef8, 0xc0029e8100)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc01194bef8, 0xc0029e8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc01194bef8, 0xc0029e8100)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc01194bef8, 0xc0029e8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc01194bef8, 0xc0029e8100)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc01194bef8, 0xc0029e8100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc01194bef8, 0xc001b2be00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc01194bef8, 0xc001b2be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00af17bc0, 0xc00ea78ea0, 0x75f4ac0, 0xc01194bef8, 0xc001b2be00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35220]
I0319 16:15:11.259806  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.421424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.260226  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0319 16:15:11.260404  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.260667  106300 wrap.go:47] GET /healthz: (3.279098ms) 500
goroutine 30298 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f333030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f333030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f03a6c0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc01016de30, 0xc00b112280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc01016de30, 0xc0027ddd00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc01016de30, 0xc0027ddd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc01016de30, 0xc0027ddd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc01016de30, 0xc0027ddd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc01016de30, 0xc0027ddd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc01016de30, 0xc0027ddd00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc01016de30, 0xc0027ddd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc01016de30, 0xc0027ddd00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc01016de30, 0xc0027ddd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc01016de30, 0xc0027ddd00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc01016de30, 0xc0027ddd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc01016de30, 0xc0027ddc00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc01016de30, 0xc0027ddc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009b4c060, 0xc00ea78ea0, 0x75f4ac0, 0xc01016de30, 0xc0027ddc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.277576  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.425119ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.298780  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.471924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.299319  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0319 16:15:11.318041  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.775679ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.338477  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.338672  106300 wrap.go:47] GET /healthz: (1.505706ms) 500
goroutine 30302 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f333650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f333650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f03b960, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc01016def0, 0xc009a36640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc01016def0, 0xc002902700)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc01016def0, 0xc002902700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc01016def0, 0xc002902700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc01016def0, 0xc002902700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc01016def0, 0xc002902700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc01016def0, 0xc002902700)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc01016def0, 0xc002902700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc01016def0, 0xc002902700)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc01016def0, 0xc002902700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc01016def0, 0xc002902700)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc01016def0, 0xc002902700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc01016def0, 0xc002902600)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc01016def0, 0xc002902600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0076be600, 0xc00ea78ea0, 0x75f4ac0, 0xc01016def0, 0xc002902600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35220]
I0319 16:15:11.339296  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.978509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.339529  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0319 16:15:11.358360  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.358613  106300 wrap.go:47] GET /healthz: (1.380124ms) 500
goroutine 30240 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f693dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f693dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00efad320, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00eb83340, 0xc003079b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00eb83340, 0xc000ab1900)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00eb83340, 0xc000ab1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00eb83340, 0xc000ab1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00eb83340, 0xc000ab1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00eb83340, 0xc000ab1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00eb83340, 0xc000ab1900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00eb83340, 0xc000ab1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00eb83340, 0xc000ab1900)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00eb83340, 0xc000ab1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00eb83340, 0xc000ab1900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00eb83340, 0xc000ab1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00eb83340, 0xc000ab1500)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00eb83340, 0xc000ab1500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00afaf800, 0xc00ea78ea0, 0x75f4ac0, 0xc00eb83340, 0xc000ab1500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.358831  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.938724ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.378805  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.611933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.379102  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
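Note: between the healthz probes, the log shows the bootstrapper's get-or-create rhythm for each default binding: a GET on the named clusterrolebinding returns 404, a POST to the collection returns 201, and a "created clusterrolebinding" line follows. A minimal sketch of that pattern over the same REST paths; the function name, the empty JSON body, and the error handling are illustrative assumptions, not the bootstrapper's actual code:

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    // ensureClusterRoleBinding mirrors the 404-then-201 sequence above:
    // look up the named binding, and create it only if it is absent.
    func ensureClusterRoleBinding(client *http.Client, base, name string, body []byte) error {
        getURL := base + "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/" + name
        resp, err := client.Get(getURL)
        if err != nil {
            return err
        }
        resp.Body.Close()
        switch resp.StatusCode {
        case http.StatusOK:
            return nil // already present, nothing to create
        case http.StatusNotFound:
            // absent: fall through and create it
        default:
            return fmt.Errorf("unexpected status %d for %s", resp.StatusCode, name)
        }
        postURL := base + "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings"
        resp, err = client.Post(postURL, "application/json", bytes.NewReader(body))
        if err != nil {
            return err
        }
        resp.Body.Close()
        if resp.StatusCode != http.StatusCreated { // the log shows 201 on create
            return fmt.Errorf("create %s: status %d", name, resp.StatusCode)
        }
        return nil
    }

    func main() {
        // Name and body are placeholders; a real request carries a
        // ClusterRoleBinding manifest.
        _ = ensureClusterRoleBinding(http.DefaultClient, "http://127.0.0.1:8080",
            "system:controller:example", []byte(`{}`))
    }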
I0319 16:15:11.397751  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.555935ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.419189  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.845074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.419597  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0319 16:15:11.438191  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.438676  106300 wrap.go:47] GET /healthz: (1.479508ms) 500
goroutine 30357 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f333b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f333b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ef9dda0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00f1a4010, 0xc00040e500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00f1a4010, 0xc003371e00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00f1a4010, 0xc003371e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00f1a4010, 0xc003371e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00f1a4010, 0xc003371e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00f1a4010, 0xc003371e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00f1a4010, 0xc003371e00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00f1a4010, 0xc003371e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00f1a4010, 0xc003371e00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00f1a4010, 0xc003371e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00f1a4010, 0xc003371e00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00f1a4010, 0xc003371e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00f1a4010, 0xc003371d00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00f1a4010, 0xc003371d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0076bfc20, 0xc00ea78ea0, 0x75f4ac0, 0xc00f1a4010, 0xc003371d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35220]
I0319 16:15:11.439013  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (2.828298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.459204  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.459243  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.986861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.459428  106300 wrap.go:47] GET /healthz: (2.253261ms) 500
goroutine 30316 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f1413b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f1413b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ef18f60, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00e7ce240, 0xc00040eb40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00e7ce240, 0xc003b7cc00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00e7ce240, 0xc003b7cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00e7ce240, 0xc003b7cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00e7ce240, 0xc003b7cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00e7ce240, 0xc003b7cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00e7ce240, 0xc003b7cc00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00e7ce240, 0xc003b7cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00e7ce240, 0xc003b7cc00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00e7ce240, 0xc003b7cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00e7ce240, 0xc003b7cc00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00e7ce240, 0xc003b7cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00e7ce240, 0xc003b7cb00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00e7ce240, 0xc003b7cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006b71aa0, 0xc00ea78ea0, 0x75f4ac0, 0xc00e7ce240, 0xc003b7cb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.459747  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0319 16:15:11.478042  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.79619ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.499017  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.743523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.499707  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0319 16:15:11.518131  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.857351ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.539334  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.15702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.539799  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0319 16:15:11.540434  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.540711  106300 wrap.go:47] GET /healthz: (3.443792ms) 500
goroutine 30347 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ef5a930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ef5a930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ee17800, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0113a96a8, 0xc00040f400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0113a96a8, 0xc003b33600)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0113a96a8, 0xc003b33600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0113a96a8, 0xc003b33600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0113a96a8, 0xc003b33600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0113a96a8, 0xc003b33600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0113a96a8, 0xc003b33600)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0113a96a8, 0xc003b33600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0113a96a8, 0xc003b33600)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0113a96a8, 0xc003b33600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0113a96a8, 0xc003b33600)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0113a96a8, 0xc003b33600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0113a96a8, 0xc003b33400)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0113a96a8, 0xc003b33400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a9bc360, 0xc00ea78ea0, 0x75f4ac0, 0xc0113a96a8, 0xc003b33400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
I0319 16:15:11.558300  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.959596ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.558562  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.558959  106300 wrap.go:47] GET /healthz: (1.633371ms) 500
goroutine 30365 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f030460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f030460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ee3f6e0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00f1a41c0, 0xc009a36c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2d00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2d00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2d00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2d00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2c00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00f1a41c0, 0xc003cd2c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004bbbe60, 0xc00ea78ea0, 0x75f4ac0, 0xc00f1a41c0, 0xc003cd2c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.580005  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.490577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.580420  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0319 16:15:11.598301  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.994807ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.620037  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.672821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.620611  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0319 16:15:11.638816  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (2.584987ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.640022  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.640366  106300 wrap.go:47] GET /healthz: (3.23267ms) 500
goroutine 30374 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ef5b960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ef5b960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00edb7d60, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0113a9b10, 0xc0087fe8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0113a9b10, 0xc0054e8c00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0113a9b10, 0xc0054e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0113a9b10, 0xc0054e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0113a9b10, 0xc0054e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0113a9b10, 0xc0054e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0113a9b10, 0xc0054e8c00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0113a9b10, 0xc0054e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0113a9b10, 0xc0054e8c00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0113a9b10, 0xc0054e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0113a9b10, 0xc0054e8c00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0113a9b10, 0xc0054e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0113a9b10, 0xc0054e8000)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0113a9b10, 0xc0054e8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005862060, 0xc00ea78ea0, 0x75f4ac0, 0xc0113a9b10, 0xc0054e8000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
I0319 16:15:11.659038  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.659305  106300 wrap.go:47] GET /healthz: (1.75839ms) 500
goroutine 30376 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ef5ba40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ef5ba40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ed8a020, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0113a9bc0, 0xc0087fedc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0113a9bc0, 0xc005614500)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0113a9bc0, 0xc005614500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0113a9bc0, 0xc005614500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0113a9bc0, 0xc005614500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0113a9bc0, 0xc005614500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0113a9bc0, 0xc005614500)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0113a9bc0, 0xc005614500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0113a9bc0, 0xc005614500)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0113a9bc0, 0xc005614500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0113a9bc0, 0xc005614500)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0113a9bc0, 0xc005614500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0113a9bc0, 0xc005614300)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0113a9bc0, 0xc005614300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0058623c0, 0xc00ea78ea0, 0x75f4ac0, 0xc0113a9bc0, 0xc005614300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.659741  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.653526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.660035  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0319 16:15:11.678124  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.726415ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.699999  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.678067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.700547  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0319 16:15:11.717855  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.553204ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.738980  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.739328  106300 wrap.go:47] GET /healthz: (1.643771ms) 500
goroutine 30320 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ef28930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ef28930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00edffca0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00e7ce4e0, 0xc009a37540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00e7ce4e0, 0xc002a97000)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00e7ce4e0, 0xc002a97000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00e7ce4e0, 0xc002a97000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00e7ce4e0, 0xc002a97000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00e7ce4e0, 0xc002a97000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00e7ce4e0, 0xc002a97000)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00e7ce4e0, 0xc002a97000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00e7ce4e0, 0xc002a97000)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00e7ce4e0, 0xc002a97000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00e7ce4e0, 0xc002a97000)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00e7ce4e0, 0xc002a97000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00e7ce4e0, 0xc002a96f00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00e7ce4e0, 0xc002a96f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004cec2a0, 0xc00ea78ea0, 0x75f4ac0, 0xc00e7ce4e0, 0xc002a96f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35220]
I0319 16:15:11.744210  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.284964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.745550  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0319 16:15:11.760560  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (4.334845ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:11.762862  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.763174  106300 wrap.go:47] GET /healthz: (6.01488ms) 500
goroutine 30336 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ece01c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ece01c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ec7e220, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc000c2b8f8, 0xc009a37900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5700)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5700)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5700)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5700)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5500)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc000c2b8f8, 0xc005bf5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007d4b980, 0xc00ea78ea0, 0x75f4ac0, 0xc000c2b8f8, 0xc005bf5500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.780354  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.724566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.780681  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0319 16:15:11.798205  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (2.003451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.820913  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.539564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.821259  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0319 16:15:11.841520  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (5.277266ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.876163  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.876371  106300 wrap.go:47] GET /healthz: (39.226978ms) 500
goroutine 30382 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00eef1110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00eef1110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ec70960, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00f0e8070, 0xc002c62780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00f0e8070, 0xc00532fc00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00f0e8070, 0xc00532fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00f0e8070, 0xc00532fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00f0e8070, 0xc00532fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00f0e8070, 0xc00532fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00f0e8070, 0xc00532fc00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00f0e8070, 0xc00532fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00f0e8070, 0xc00532fc00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00f0e8070, 0xc00532fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00f0e8070, 0xc00532fc00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00f0e8070, 0xc00532fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00f0e8070, 0xc00532fb00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00f0e8070, 0xc00532fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005863da0, 0xc00ea78ea0, 0x75f4ac0, 0xc00f0e8070, 0xc00532fb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
I0319 16:15:11.884521  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.884726  106300 wrap.go:47] GET /healthz: (26.142636ms) 500
goroutine 30440 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f031f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f031f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ec98fc0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00f1a4470, 0xc002e7f2c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00f1a4470, 0xc00684c500)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00f1a4470, 0xc00684c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00f1a4470, 0xc00684c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00f1a4470, 0xc00684c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00f1a4470, 0xc00684c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00f1a4470, 0xc00684c500)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00f1a4470, 0xc00684c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00f1a4470, 0xc00684c500)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00f1a4470, 0xc00684c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00f1a4470, 0xc00684c500)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00f1a4470, 0xc00684c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00f1a4470, 0xc00684c100)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00f1a4470, 0xc00684c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004bcf980, 0xc00ea78ea0, 0x75f4ac0, 0xc00f1a4470, 0xc00684c100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0319 16:15:11.884981  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (28.532968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35220]
I0319 16:15:11.885327  106300 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0319 16:15:11.891251  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (5.643894ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0319 16:15:11.894770  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.972869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0319 16:15:11.900052  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.01732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0319 16:15:11.900330  106300 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0319 16:15:11.971898  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.972184  106300 wrap.go:47] GET /healthz: (34.905561ms) 500
goroutine 30384 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00eef15e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00eef15e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ec71620, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00f0e80f8, 0xc002e7f7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd100)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd100)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd100)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd100)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd000)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00f0e80f8, 0xc0088fd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0030329c0, 0xc00ea78ea0, 0x75f4ac0, 0xc00f0e80f8, 0xc0088fd000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
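Note: the quoted "logging error output" bodies all share one shape: a "[+]name ok" or "[-]name failed: ..." line per registered check, a trailing "healthz check failed", and a 500 whenever any check fails. A rough sketch of a handler producing that shape, with made-up check names and wiring; only the per-check output format and the failure status code are taken from the log:

    package main

    import (
        "fmt"
        "net/http"
    )

    // check is a named health probe, like "ping" or
    // "poststarthook/rbac/bootstrap-roles" in the log above.
    type check struct {
        name string
        run  func() error
    }

    // healthzHandler runs every check, reports each as [+]/[-], and
    // returns 500 with the full report if any check failed.
    func healthzHandler(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            var body string
            failed := false
            for _, c := range checks {
                if err := c.run(); err != nil {
                    body += fmt.Sprintf("[-]%s failed: %v\n", c.name, err)
                    failed = true
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                http.Error(w, body+"healthz check failed", http.StatusInternalServerError)
                return
            }
            fmt.Fprint(w, "ok")
        }
    }

    func main() {
        checks := []check{
            {"ping", func() error { return nil }},
            {"poststarthook/rbac/bootstrap-roles", func() error { return fmt.Errorf("not finished") }},
        }
        http.HandleFunc("/healthz", healthzHandler(checks))
        http.ListenAndServe("127.0.0.1:0", nil)
    }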
I0319 16:15:11.972598  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:11.972776  106300 wrap.go:47] GET /healthz: (15.181753ms) 500
goroutine 30398 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ecbad20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ecbad20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ec12fa0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0121e0ce0, 0xc002e7fcc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0121e0ce0, 0xc008976000)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0121e0ce0, 0xc008976000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0121e0ce0, 0xc008976000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0121e0ce0, 0xc008976000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0121e0ce0, 0xc008976000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0121e0ce0, 0xc008976000)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0121e0ce0, 0xc008976000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0121e0ce0, 0xc008976000)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0121e0ce0, 0xc008976000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0121e0ce0, 0xc008976000)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0121e0ce0, 0xc008976000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0121e0ce0, 0xc0061bde00)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0121e0ce0, 0xc0061bde00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003359aa0, 0xc00ea78ea0, 0x75f4ac0, 0xc0121e0ce0, 0xc0061bde00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:11.975203  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (59.055194ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35422]
I0319 16:15:11.977384  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.662332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:11.980269  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.076909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:11.980523  106300 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0319 16:15:11.982051  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (983.198µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:11.984777  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.233483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:11.988485  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.988095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:11.989390  106300 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0319 16:15:12.000770  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (4.543471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.003319  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.009972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.019374  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.173943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.019718  106300 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0319 16:15:12.037856  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.62664ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.043538  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (5.029186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.043759  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:12.043918  106300 wrap.go:47] GET /healthz: (4.627981ms) 500
goroutine 30471 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ea801c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ea801c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ead8aa0, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0121e0e80, 0xc00040fa40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0121e0e80, 0xc008a1a800)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0121e0e80, 0xc008a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0121e0e80, 0xc008a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0121e0e80, 0xc008a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0121e0e80, 0xc008a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0121e0e80, 0xc008a1a800)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0121e0e80, 0xc008a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0121e0e80, 0xc008a1a800)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0121e0e80, 0xc008a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0121e0e80, 0xc008a1a800)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0121e0e80, 0xc008a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0121e0e80, 0xc008a1a700)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0121e0e80, 0xc008a1a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00baf1560, 0xc00ea78ea0, 0x75f4ac0, 0xc0121e0e80, 0xc008a1a700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
I0319 16:15:12.059577  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.283736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.060606  106300 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0319 16:15:12.060654  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:12.060848  106300 wrap.go:47] GET /healthz: (2.07934ms) 500
goroutine 30425 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ece1810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ece1810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ebf3180, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc000c2bc98, 0xc005112000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc000c2bc98, 0xc007b35800)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc000c2bc98, 0xc007b35800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc000c2bc98, 0xc007b35800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc000c2bc98, 0xc007b35800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc000c2bc98, 0xc007b35800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc000c2bc98, 0xc007b35800)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc000c2bc98, 0xc007b35800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc000c2bc98, 0xc007b35800)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc000c2bc98, 0xc007b35800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc000c2bc98, 0xc007b35800)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc000c2bc98, 0xc007b35800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc000c2bc98, 0xc007b35700)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc000c2bc98, 0xc007b35700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00472eea0, 0xc00ea78ea0, 0x75f4ac0, 0xc000c2bc98, 0xc007b35700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
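
The 500s on GET /healthz above repeat until the rbac/bootstrap-roles post-start hook finishes; each failed check dumps the serving goroutine's stack through the filter chain (timeout, authentication, impersonation, max-in-flight, authorization, mux, healthz handler). A minimal, purely illustrative Go sketch of that gating behavior follows; none of the names in it are apiserver APIs.

package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

func main() {
	var bootstrapDone atomic.Bool // flipped to true once bootstrap roles are reconciled

	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if !bootstrapDone.Load() {
			// Mirrors "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
			http.Error(w, "healthz check failed", http.StatusInternalServerError)
			return
		}
		fmt.Fprintln(w, "ok")
	})
	// A real server would run the bootstrap in a goroutine and call
	// bootstrapDone.Store(true) when it completes.
	http.ListenAndServe(":8080", nil)
}
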
I0319 16:15:12.077818  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.576232ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.080295  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.951576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.101271  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.896465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.101638  106300 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0319 16:15:12.123831  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.617808ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.126147  106300 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.530812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.139288  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.069079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.140572  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:12.140734  106300 wrap.go:47] GET /healthz: (1.27398ms) 500
goroutine 30286 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f1a90a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f1a90a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ed8f500, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00eab1388, 0xc00dfc4280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00eab1388, 0xc00b4c0900)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00eab1388, 0xc00b4c0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00eab1388, 0xc00b4c0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00eab1388, 0xc00b4c0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00eab1388, 0xc00b4c0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00eab1388, 0xc00b4c0900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00eab1388, 0xc00b4c0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00eab1388, 0xc00b4c0900)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00eab1388, 0xc00b4c0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00eab1388, 0xc00b4c0900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00eab1388, 0xc00b4c0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00eab1388, 0xc00b4c0800)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00eab1388, 0xc00b4c0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009cc3380, 0xc00ea78ea0, 0x75f4ac0, 0xc00eab1388, 0xc00b4c0800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35480]
I0319 16:15:12.141590  106300 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0319 16:15:12.158673  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:12.158882  106300 wrap.go:47] GET /healthz: (1.198768ms) 500
goroutine 30484 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e98e8c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e98e8c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00eaaaf80, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00f1a47d0, 0xc00584a500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4900)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4900)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4900)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4800)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00f1a47d0, 0xc00b1e4800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002669ce0, 0xc00ea78ea0, 0x75f4ac0, 0xc00f1a47d0, 0xc00b1e4800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.159666  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (3.43252ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.162909  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.550565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.179417  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.244988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.179888  106300 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0319 16:15:12.197895  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.687608ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.200181  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.785657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.219131  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.875769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.219440  106300 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0319 16:15:12.237918  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.764298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.242261  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.831134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.242298  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:12.242825  106300 wrap.go:47] GET /healthz: (1.85605ms) 500
goroutine 30490 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e98f420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e98f420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e9a8b80, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00f1a48a8, 0xc00584aa00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00f1a48a8, 0xc00b664600)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00f1a48a8, 0xc00b664600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00f1a48a8, 0xc00b664600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00f1a48a8, 0xc00b664600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00f1a48a8, 0xc00b664600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00f1a48a8, 0xc00b664600)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00f1a48a8, 0xc00b664600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00f1a48a8, 0xc00b664600)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00f1a48a8, 0xc00b664600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00f1a48a8, 0xc00b664600)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00f1a48a8, 0xc00b664600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00f1a48a8, 0xc00b664500)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00f1a48a8, 0xc00b664500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a5f7620, 0xc00ea78ea0, 0x75f4ac0, 0xc00f1a48a8, 0xc00b664500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35480]
I0319 16:15:12.258243  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:12.258901  106300 wrap.go:47] GET /healthz: (1.885963ms) 500
goroutine 30514 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e8c4150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e8c4150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e99a720, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00e7ce9b8, 0xc00584adc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18200)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18200)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18200)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18200)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18100)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00e7ce9b8, 0xc00ba18100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b100960, 0xc00ea78ea0, 0x75f4ac0, 0xc00e7ce9b8, 0xc00ba18100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.262941  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (6.673787ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.263224  106300 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0319 16:15:12.287171  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (10.642471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.291920  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (4.088578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.298807  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.60704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.299151  106300 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0319 16:15:12.317565  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.402719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.320666  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.665631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.338574  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.401133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.338858  106300 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0319 16:15:12.339000  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:12.339238  106300 wrap.go:47] GET /healthz: (1.740056ms) 500
goroutine 30503 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e87e460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e87e460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e959460, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc0121e11d0, 0xc002c63900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9800)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9800)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9800)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9800)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9700)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc0121e11d0, 0xc00c9a9700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009fb26c0, 0xc00ea78ea0, 0x75f4ac0, 0xc0121e11d0, 0xc00c9a9700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:35176]
I0319 16:15:12.359209  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (3.010533ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.359354  106300 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0319 16:15:12.359546  106300 wrap.go:47] GET /healthz: (2.49883ms) 500
goroutine 30548 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ef1d030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ef1d030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ea4ad80, 0x1f4)
net/http.Error(0x7ff71a440380, 0xc00eb837a8, 0xc0087ff540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7ff71a440380, 0xc00eb837a8, 0xc0058a9a00)
net/http.HandlerFunc.ServeHTTP(0xc011d97c20, 0x7ff71a440380, 0xc00eb837a8, 0xc0058a9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc011da5840, 0x7ff71a440380, 0xc00eb837a8, 0xc0058a9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00df2b2d0, 0x7ff71a440380, 0xc00eb837a8, 0xc0058a9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4536181, 0xe, 0xc00d8e3dd0, 0xc00df2b2d0, 0x7ff71a440380, 0xc00eb837a8, 0xc0058a9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7ff71a440380, 0xc00eb837a8, 0xc0058a9a00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90340, 0x7ff71a440380, 0xc00eb837a8, 0xc0058a9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7ff71a440380, 0xc00eb837a8, 0xc0058a9a00)
net/http.HandlerFunc.ServeHTTP(0xc00f2e7e90, 0x7ff71a440380, 0xc00eb837a8, 0xc0058a9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7ff71a440380, 0xc00eb837a8, 0xc0058a9a00)
net/http.HandlerFunc.ServeHTTP(0xc00ec90380, 0x7ff71a440380, 0xc00eb837a8, 0xc0058a9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7ff71a440380, 0xc00eb837a8, 0xc0058a9400)
net/http.HandlerFunc.ServeHTTP(0xc00ec5a2d0, 0x7ff71a440380, 0xc00eb837a8, 0xc0058a9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006dcf560, 0xc00ea78ea0, 0x75f4ac0, 0xc00eb837a8, 0xc0058a9400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.369579  106300 wrap.go:47] GET /api/v1/namespaces/kube-system: (5.202851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.379674  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.115048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.379905  106300 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0319 16:15:12.397543  106300 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.304235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.399792  106300 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.828611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.418676  106300 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.424197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.420536  106300 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0319 16:15:12.438428  106300 wrap.go:47] GET /healthz: (1.099684ms) 200 [Go-http-client/1.1 127.0.0.1:35176]
W0319 16:15:12.439264  106300 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0319 16:15:12.439313  106300 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0319 16:15:12.439340  106300 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0319 16:15:12.439352  106300 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0319 16:15:12.439363  106300 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0319 16:15:12.439375  106300 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0319 16:15:12.439385  106300 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0319 16:15:12.439401  106300 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0319 16:15:12.439416  106300 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0319 16:15:12.439427  106300 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
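
The ten warnings above come from client-go's cache mutation detector, which integration tests switch on so that any accidental mutation of an object returned from an informer cache is caught; it keeps deep copies of cached objects for later comparison, hence the memory-leak warning. KUBE_CACHE_MUTATION_DETECTOR is the real client-go switch; the rest of this sketch is illustrative.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Informer caches created after this point deep-copy their objects and
	// compare them later, trading memory for mutation detection.
	os.Setenv("KUBE_CACHE_MUTATION_DETECTOR", "true")
	fmt.Println("mutation detector enabled for subsequently created informers")
}
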
I0319 16:15:12.439521  106300 factory.go:331] Creating scheduler from algorithm provider 'DefaultProvider'
I0319 16:15:12.439534  106300 factory.go:412] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
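
The fit predicates listed above are boolean filters evaluated per node; the "Insufficient cpu, Insufficient memory" failures later in this log come from the resource check inside GeneralPredicates. Below is a simplified stand-in for that check, using hypothetical types rather than the scheduler's internal API.

package main

import "fmt"

type resources struct{ milliCPU, memoryBytes int64 }

// fits reports whether a pod's requests fit in the node's free capacity,
// returning the list of exhausted resources otherwise.
func fits(request, free resources) (bool, []string) {
	var insufficient []string
	if request.milliCPU > free.milliCPU {
		insufficient = append(insufficient, "Insufficient cpu")
	}
	if request.memoryBytes > free.memoryBytes {
		insufficient = append(insufficient, "Insufficient memory")
	}
	return len(insufficient) == 0, insufficient
}

func main() {
	ok, reasons := fits(resources{500, 1 << 30}, resources{100, 1 << 20})
	fmt.Println(ok, reasons) // false [Insufficient cpu Insufficient memory]
}
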
I0319 16:15:12.439729  106300 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0319 16:15:12.439986  106300 reflector.go:123] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:211
I0319 16:15:12.440000  106300 reflector.go:161] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:211
I0319 16:15:12.441056  106300 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (719.186µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35176]
I0319 16:15:12.442380  106300 get.go:251] Starting watch for /api/v1/pods, rv=22352 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=9m49s
I0319 16:15:12.472129  106300 wrap.go:47] GET /healthz: (14.169375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.474224  106300 wrap.go:47] GET /api/v1/namespaces/default: (1.436857ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.477738  106300 wrap.go:47] POST /api/v1/namespaces: (2.965179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.480040  106300 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.62966ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.487925  106300 wrap.go:47] POST /api/v1/namespaces/default/services: (6.882401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.490598  106300 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.130218ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.493796  106300 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (2.726672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.540306  106300 shared_informer.go:123] caches populated
I0319 16:15:12.540340  106300 controller_utils.go:1034] Caches are synced for scheduler controller
I0319 16:15:12.540724  106300 reflector.go:123] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.540746  106300 reflector.go:161] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.541327  106300 reflector.go:123] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.541353  106300 reflector.go:161] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.541773  106300 reflector.go:123] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.541788  106300 reflector.go:161] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.542621  106300 reflector.go:123] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.542643  106300 reflector.go:161] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.542675  106300 reflector.go:123] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.542700  106300 reflector.go:161] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.543149  106300 reflector.go:123] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.543168  106300 reflector.go:161] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.543169  106300 reflector.go:123] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.543188  106300 reflector.go:161] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.543548  106300 reflector.go:123] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.543564  106300 reflector.go:161] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.544439  106300 reflector.go:123] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.544468  106300 reflector.go:161] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0319 16:15:12.546637  106300 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (828.058µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35524]
I0319 16:15:12.547316  106300 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (550.017µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35510]
I0319 16:15:12.547839  106300 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (423.982µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35512]
I0319 16:15:12.548362  106300 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (420.697µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35514]
I0319 16:15:12.549494  106300 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (468.346µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35480]
I0319 16:15:12.551342  106300 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=22352 labels= fields= timeout=8m16s
I0319 16:15:12.551926  106300 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=22352 labels= fields= timeout=7m11s
I0319 16:15:12.552208  106300 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (508.583µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35522]
I0319 16:15:12.552434  106300 get.go:251] Starting watch for /api/v1/services, rv=22633 labels= fields= timeout=8m15s
I0319 16:15:12.552710  106300 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (393.146µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35518]
I0319 16:15:12.553190  106300 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (381.722µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35520]
I0319 16:15:12.553552  106300 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=22352 labels= fields= timeout=9m17s
I0319 16:15:12.553904  106300 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=22352 labels= fields= timeout=5m50s
I0319 16:15:12.554419  106300 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=22352 labels= fields= timeout=7m38s
I0319 16:15:12.554427  106300 get.go:251] Starting watch for /api/v1/nodes, rv=22352 labels= fields= timeout=6m15s
I0319 16:15:12.554887  106300 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=22352 labels= fields= timeout=5m9s
I0319 16:15:12.566173  106300 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (684.894µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35516]
I0319 16:15:12.567371  106300 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=22352 labels= fields= timeout=5m18s
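
The reflector lines above are the standard client-go shared-informer startup: one LIST (limit=500&resourceVersion=0) followed by a WATCH per resource type, with a 1s resync period that produces the periodic "forcing resync" lines. A hedged sketch of that pattern; the kubeconfig path is a placeholder.

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(client, 1*time.Second)
	factory.Core().V1().Nodes().Informer() // registers a Node informer; Pods, PVs, etc. work the same way
	factory.Start(stop)                    // each informer starts a reflector: LIST then WATCH
	factory.WaitForCacheSync(stop)         // corresponds to the "caches populated" lines below
}
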
I0319 16:15:12.640874  106300 shared_informer.go:123] caches populated
I0319 16:15:12.741215  106300 shared_informer.go:123] caches populated
I0319 16:15:12.841477  106300 shared_informer.go:123] caches populated
I0319 16:15:12.941736  106300 shared_informer.go:123] caches populated
I0319 16:15:13.043526  106300 shared_informer.go:123] caches populated
I0319 16:15:13.143772  106300 shared_informer.go:123] caches populated
I0319 16:15:13.245279  106300 shared_informer.go:123] caches populated
I0319 16:15:13.345530  106300 shared_informer.go:123] caches populated
I0319 16:15:13.445816  106300 shared_informer.go:123] caches populated
I0319 16:15:13.546020  106300 shared_informer.go:123] caches populated
I0319 16:15:13.550123  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:13.550708  106300 wrap.go:47] POST /api/v1/nodes: (3.348656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0319 16:15:13.552315  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:13.552414  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:13.552440  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:13.553542  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:13.554739  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.462332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0319 16:15:13.555536  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0
I0319 16:15:13.555557  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0
I0319 16:15:13.555704  106300 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0", node "node1"
I0319 16:15:13.555723  106300 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0319 16:15:13.555769  106300 factory.go:733] Attempting to bind rpod-0 to node1
I0319 16:15:13.558271  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-0/binding: (1.815194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35552]
I0319 16:15:13.558271  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.383145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0319 16:15:13.558607  106300 scheduler.go:572] pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0319 16:15:13.559199  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1
I0319 16:15:13.559219  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1
I0319 16:15:13.559344  106300 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1", node "node1"
I0319 16:15:13.559370  106300 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0319 16:15:13.559418  106300 factory.go:733] Attempting to bind rpod-1 to node1
I0319 16:15:13.562906  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1/binding: (3.087561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0319 16:15:13.563222  106300 scheduler.go:572] pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
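
The POST .../pods/<name>/binding requests above are the scheduler committing its placement decision through the pod's binding subresource. A hedged, minimal equivalent with client-go; ns, podName, and nodeName are placeholders, and wiring up a real clientset is omitted.

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPod issues the equivalent of
// POST /api/v1/namespaces/{ns}/pods/{podName}/binding
func bindPod(client kubernetes.Interface, ns, podName, nodeName string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: podName, Namespace: ns},
		Target:     v1.ObjectReference{Kind: "Node", Name: nodeName},
	}
	return client.CoreV1().Pods(ns).Bind(context.TODO(), binding, metav1.CreateOptions{})
}

func main() {} // clientset construction omitted; see the informer sketch above
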
I0319 16:15:13.566308  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (7.364551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35552]
I0319 16:15:13.569380  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.308594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35552]
I0319 16:15:13.666292  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-0: (3.284496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35552]
I0319 16:15:13.769368  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1: (2.09431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35552]
I0319 16:15:13.769880  106300 preemption_test.go:561] Creating the preemptor pod...
I0319 16:15:13.773265  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.045483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35552]
I0319 16:15:13.773567  106300 preemption_test.go:567] Creating additional pods...
I0319 16:15:13.773678  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:13.773705  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:13.773846  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.773896  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.777411  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.795575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35558]
I0319 16:15:13.778384  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod/status: (4.074215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
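
The PUT .../pods/preemptor-pod/status above records why the pod stays pending. A sketch of the condition it writes; PodReasonUnschedulable is a real core/v1 constant, the rest is illustrative.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	cond := v1.PodCondition{
		Type:    v1.PodScheduled,
		Status:  v1.ConditionFalse,
		Reason:  v1.PodReasonUnschedulable, // "Unschedulable"
		Message: "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.",
	}
	fmt.Printf("%s=%s (%s)\n", cond.Type, cond.Status, cond.Reason)
}
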
I0319 16:15:13.781042  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (3.469576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35556]
I0319 16:15:13.781299  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (7.317884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35552]
I0319 16:15:13.781715  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.84099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0319 16:15:13.783262  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.786582  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod/status: (2.719206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
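
"Node node1 is a potential node for preemption" means the scheduler found a node where evicting enough lower-priority pods would let the preemptor fit. A simplified sketch of that victim selection, using stand-in types rather than scheduler internals.

package main

import (
	"fmt"
	"sort"
)

type pod struct {
	name     string
	priority int32
	milliCPU int64
}

// victimsFor returns lowest-priority pods whose removal lets preemptor fit,
// or ok=false if evicting every lower-priority pod is still not enough.
func victimsFor(preemptor pod, running []pod, freeMilliCPU int64) (victims []string, ok bool) {
	lower := make([]pod, 0, len(running))
	for _, p := range running {
		if p.priority < preemptor.priority {
			lower = append(lower, p)
		}
	}
	sort.Slice(lower, func(i, j int) bool { return lower[i].priority < lower[j].priority })
	for _, p := range lower {
		if freeMilliCPU >= preemptor.milliCPU {
			break
		}
		victims = append(victims, p.name)
		freeMilliCPU += p.milliCPU
	}
	return victims, freeMilliCPU >= preemptor.milliCPU
}

func main() {
	victims, ok := victimsFor(
		pod{"preemptor-pod", 1000, 500},
		[]pod{{"rpod-0", 0, 300}, {"rpod-1", 0, 300}},
		0,
	)
	fmt.Println(victims, ok) // [rpod-0 rpod-1] true
}
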
I0319 16:15:13.786794  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (4.889326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35556]
I0319 16:15:13.789431  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.129092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35558]
I0319 16:15:13.792884  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.152093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35558]
I0319 16:15:13.795699  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1: (8.635496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0319 16:15:13.796002  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:13.796055  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:13.796264  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.796340  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.798701  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.663099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0319 16:15:13.798729  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.584731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0319 16:15:13.800208  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:13.800252  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:13.800398  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.800448  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.800553  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (6.843008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35558]
I0319 16:15:13.805353  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0/status: (2.784665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35558]
I0319 16:15:13.805904  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (4.098119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0319 16:15:13.805431  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (4.653694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0319 16:15:13.806273  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (9.250329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.809243  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.506045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0319 16:15:13.810018  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (3.100265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0319 16:15:13.810362  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.810548  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:13.810568  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:13.810654  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.810699  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.811636  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.922161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0319 16:15:13.812122  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (990.504µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.813684  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/preemptor-pod.158d684a0d5eec0c: (8.007763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35566]
I0319 16:15:13.814313  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.086453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0319 16:15:13.815784  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.758542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35566]
I0319 16:15:13.815819  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1/status: (4.896055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35562]
I0319 16:15:13.817681  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.402255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35566]
I0319 16:15:13.819488  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (3.072449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.819761  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.819967  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:13.819994  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (5.106896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35550]
I0319 16:15:13.820010  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:13.820241  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.820415  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.823354  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.82039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0319 16:15:13.824831  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2/status: (3.929892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35566]
I0319 16:15:13.825609  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (4.171006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35570]
I0319 16:15:13.826847  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.422123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35566]
I0319 16:15:13.829686  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.829887  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:13.829916  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:13.830026  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.830138  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.834354  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (3.766234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0319 16:15:13.834940  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.349127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35572]
I0319 16:15:13.838377  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (12.120261ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35570]
I0319 16:15:13.841953  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (21.253587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.842577  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3/status: (6.557525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35566]
I0319 16:15:13.846121  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.933289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35566]
I0319 16:15:13.846702  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (7.790409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35568]
I0319 16:15:13.847317  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.847793  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:13.847845  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:13.848019  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.848185  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.849694  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.080045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.850056  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (1.233049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35572]
I0319 16:15:13.866131  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4/status: (15.436886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35574]
I0319 16:15:13.867654  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (17.356542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.868263  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (1.511202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35574]
I0319 16:15:13.868529  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (18.101426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35572]
I0319 16:15:13.868636  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.870415  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:13.870439  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:13.870586  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.870636  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.874013  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (4.236341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.874515  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5/status: (3.276464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.878323  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.761802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.878363  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (3.299677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35582]
I0319 16:15:13.879813  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (4.137675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.880707  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.880937  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:13.880956  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:13.881060  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.881161  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.882257  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.093702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35582]
I0319 16:15:13.883845  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (1.241946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.883848  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6/status: (2.314326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.886169  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (1.626031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.886527  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.887349  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (4.664972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35582]
I0319 16:15:13.887496  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:13.887544  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:13.887705  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.887780  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.889605  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (14.747901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35584]
I0319 16:15:13.889732  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (1.21693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35586]
I0319 16:15:13.890514  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.633499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.890895  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7/status: (2.866145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.892339  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.978651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35584]
I0319 16:15:13.893650  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (2.238265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.894205  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.894381  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.556662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35584]
I0319 16:15:13.894511  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:13.894534  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:13.894634  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.894677  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.894758  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.210375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.896499  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.431459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35586]
I0319 16:15:13.897011  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8/status: (1.950308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.898282  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.012623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.898889  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.841395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35588]
I0319 16:15:13.899611  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.175495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.899815  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.900016  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:13.900036  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:13.900173  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.900221  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.901747  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.061863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.903153  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.338873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35590]
I0319 16:15:13.903660  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9/status: (3.206777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.904121  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (3.665512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35586]
I0319 16:15:13.906278  106300 cacher.go:647] cacher (*core.Pod): 1 objects queued in incoming channel.
I0319 16:15:13.912371  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (7.528193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35564]
I0319 16:15:13.912371  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (7.729339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.912989  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
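
The earlier cacher.go line ("1 objects queued in incoming channel") reflects the watch cache buffering incoming pod events on a channel and reporting the queue depth when more than one object is waiting. A loose, assumption-laden sketch of that buffering (the real storage/cacher is far more involved; these names are illustrative):

    package main

    import "fmt"

    type event struct{ key string }

    func main() {
    	incoming := make(chan event, 100) // buffered incoming channel

    	// Producer side: enqueue and report the queue depth, roughly what
    	// "N objects queued in incoming channel" is telling us.
    	enqueue := func(e event) {
    		incoming <- e
    		if n := len(incoming); n > 1 {
    			fmt.Printf("cacher (*core.Pod): %d objects queued in incoming channel.\n", n)
    		}
    	}

    	enqueue(event{"ppod-9"})
    	enqueue(event{"ppod-10"}) // second enqueue before any dispatch -> depth 2

    	// Consumer side would drain the channel and fan events out to watchers.
    	for len(incoming) > 0 {
    		fmt.Println("dispatch", (<-incoming).key)
    	}
    }
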
I0319 16:15:13.921805  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:13.921834  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:13.922001  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.922049  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.922145  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (9.095138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.925366  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (2.52044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.926617  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (3.700167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35586]
I0319 16:15:13.927118  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.927320  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:13.927354  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:13.927513  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.927567  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.927590  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (4.53865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0319 16:15:13.930876  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-2.158d684a1024b135: (7.745723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35594]
I0319 16:15:13.930957  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (3.035452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0319 16:15:13.931444  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.385181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.931716  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10/status: (3.820096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35586]
I0319 16:15:13.932894  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.526817ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0319 16:15:13.933310  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.109566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35586]
I0319 16:15:13.933609  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.933915  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:13.933964  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:13.933968  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.003835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35578]
I0319 16:15:13.934165  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.934222  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.938287  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.773873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35596]
I0319 16:15:13.938503  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.536276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35598]
I0319 16:15:13.938635  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11/status: (2.674382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0319 16:15:13.940750  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.920558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35596]
I0319 16:15:13.941211  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (2.16874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0319 16:15:13.941815  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.942567  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:13.942584  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:13.942701  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.942741  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.944512  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.17942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0319 16:15:13.945160  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.422163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35602]
I0319 16:15:13.945862  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12/status: (2.854866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35598]
I0319 16:15:13.946595  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.108265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35600]
I0319 16:15:13.948113  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.233509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35592]
I0319 16:15:13.949001  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (2.112782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35598]
I0319 16:15:13.949347  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
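
Every cycle ends with generic_scheduler.go declaring node1 a potential node for preemption, i.e. a node where evicting lower-priority pods could free enough resources for the pending pod. A simplified sketch of that feasibility check, with hypothetical pod and node types standing in for the scheduler's internal ones (the actual algorithm also simulates victims, respects PDBs, and more):

    package main

    import "fmt"

    type pod struct {
    	name     string
    	priority int32
    	cpu, mem int64 // requested millicores / bytes
    }

    type node struct {
    	name               string
    	cpuAlloc, memAlloc int64 // allocatable resources
    	pods               []pod
    }

    // potentialNode reports whether evicting every pod with lower priority
    // than the preemptor would free enough CPU and memory for it.
    func potentialNode(n node, preemptor pod) bool {
    	freeCPU, freeMem := n.cpuAlloc, n.memAlloc
    	for _, p := range n.pods {
    		if p.priority >= preemptor.priority {
    			freeCPU -= p.cpu // higher-priority pods cannot be evicted
    			freeMem -= p.mem
    		}
    	}
    	return freeCPU >= preemptor.cpu && freeMem >= preemptor.mem
    }

    func main() {
    	n := node{name: "node1", cpuAlloc: 1000, memAlloc: 1 << 30,
    		pods: []pod{{"victim", 0, 900, 900 << 20}}}
    	preemptor := pod{"preemptor-pod", 100, 800, 800 << 20}
    	if potentialNode(n, preemptor) {
    		fmt.Printf("Node %s is a potential node for preemption.\n", n.name)
    	}
    }
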
I0319 16:15:13.949559  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:13.949588  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:13.949713  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.949771  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.949887  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (14.327219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35594]
I0319 16:15:13.951402  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.227321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35600]
I0319 16:15:13.952421  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (2.178764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35602]
I0319 16:15:13.952841  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13/status: (2.513236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35598]
I0319 16:15:13.953726  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.017159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35594]
I0319 16:15:13.954484  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.613698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35600]
I0319 16:15:13.955205  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (1.95177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35598]
I0319 16:15:13.955574  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.955868  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:13.955914  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:13.956028  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.956152  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.959033  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (2.638652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35602]
I0319 16:15:13.959703  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.274324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35608]
I0319 16:15:13.961264  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14/status: (4.347761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35606]
I0319 16:15:13.966531  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (4.175807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35608]
I0319 16:15:13.966856  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.961808  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (5.58357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35594]
I0319 16:15:13.967583  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:13.967614  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:13.967782  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.967860  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.970929  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (2.344844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35610]
I0319 16:15:13.970932  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (2.183008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35602]
I0319 16:15:13.971389  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.971651  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:13.971671  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:13.971819  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.971914  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.972990  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-5.158d684a13231139: (3.724807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35612]
I0319 16:15:13.973010  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (5.494567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35608]
I0319 16:15:13.974230  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (1.568095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35610]
I0319 16:15:13.975137  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.555543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35608]
I0319 16:15:13.975611  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.032699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35612]
I0319 16:15:13.976022  106300 cacher.go:647] cacher (*core.Pod): 2 objects queued in incoming channel.
I0319 16:15:13.980595  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.370056ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0319 16:15:13.980807  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15/status: (8.146479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35602]
I0319 16:15:13.984917  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.682088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0319 16:15:13.985256  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (3.88754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35602]
I0319 16:15:13.985587  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.985950  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:13.985971  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:13.986154  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.986203  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:13.987589  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.132642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0319 16:15:13.987964  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (1.168891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35610]
I0319 16:15:13.989888  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16/status: (3.068824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35602]
I0319 16:15:13.990781  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.56605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0319 16:15:13.991198  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.665809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35610]
I0319 16:15:13.998954  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (8.546863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35616]
I0319 16:15:13.999321  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:13.999561  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:13.999574  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:13.999696  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:13.999750  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.003175  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.214867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35618]
I0319 16:15:14.006495  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17/status: (5.692406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35614]
I0319 16:15:14.007680  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (5.940494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35616]
I0319 16:15:14.007802  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (6.984372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35604]
I0319 16:15:14.049139  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (40.497841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35616]
I0319 16:15:14.050651  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (43.687302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35614]
I0319 16:15:14.051117  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.051507  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:14.051528  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:14.051657  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.051710  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.052904  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.127674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35616]
I0319 16:15:14.054706  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (1.697185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35618]
I0319 16:15:14.056022  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.341888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0319 16:15:14.056759  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.019079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35616]
I0319 16:15:14.061334  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.737131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0319 16:15:14.069166  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (7.212989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0319 16:15:14.069706  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18/status: (13.118393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35614]
I0319 16:15:14.074255  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (3.042486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35618]
I0319 16:15:14.074619  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (4.463144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0319 16:15:14.074649  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.074889  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:14.075234  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:14.075488  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.075584  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.078639  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (1.825361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35618]
I0319 16:15:14.078859  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.973839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0319 16:15:14.078861  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.079240  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:14.079260  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:14.079370  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.079410  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.080265  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (1.309649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35618]
I0319 16:15:14.082364  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19/status: (2.504654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35620]
I0319 16:15:14.082770  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-7.158d684a14289d74: (4.607184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35622]
I0319 16:15:14.082835  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (2.547384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35626]
I0319 16:15:14.082857  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.600306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35628]
I0319 16:15:14.086357  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.754409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35624]
I0319 16:15:14.086626  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (3.292984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35618]
I0319 16:15:14.086983  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
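
Note the mix of POST /events and PATCH /events/<name> requests in the stanzas above: the first scheduling failure for a pod creates an event, and repeats of the same failure patch the existing event instead, bumping its count. A hedged sketch of that deduplication (hypothetical code, not client-go's actual event recorder):

    package main

    import "fmt"

    type eventKey struct{ pod, reason string }

    type recorder struct {
    	seen map[eventKey]int // count of identical events already emitted
    }

    func (r *recorder) record(pod, reason, msg string) {
    	k := eventKey{pod, reason}
    	if _, ok := r.seen[k]; !ok {
    		r.seen[k] = 1
    		fmt.Printf("POST /events (%s %s: %s)\n", pod, reason, msg)
    		return
    	}
    	r.seen[k]++ // duplicate: update the existing event in place
    	fmt.Printf("PATCH /events/%s.%s (count=%d)\n", pod, reason, r.seen[k])
    }

    func main() {
    	r := &recorder{seen: map[eventKey]int{}}
    	r.record("ppod-7", "FailedScheduling", "0/1 nodes are available")
    	r.record("ppod-7", "FailedScheduling", "0/1 nodes are available") // repeat -> PATCH
    }
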
I0319 16:15:14.087222  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:14.087238  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:14.087367  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.087416  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.089222  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (1.44199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35626]
I0319 16:15:14.089762  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.754654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.090257  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20/status: (2.471177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35624]
I0319 16:15:14.092448  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (1.659105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.092744  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.092942  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:14.092961  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:14.093060  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.093172  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.095777  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.799997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35634]
I0319 16:15:14.096669  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21/status: (3.166562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.096679  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (2.727446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35626]
I0319 16:15:14.098741  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (1.621074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.099215  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.099415  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:14.099494  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:14.099600  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.099650  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.102915  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.215161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35636]
I0319 16:15:14.102931  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (2.33665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35634]
I0319 16:15:14.103713  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22/status: (3.785202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.106870  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (1.50945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35636]
I0319 16:15:14.107237  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.107490  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:14.107530  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:14.107651  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.107725  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.110435  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.817329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35638]
I0319 16:15:14.110780  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (2.754407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.111221  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23/status: (3.186961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35636]
I0319 16:15:14.113563  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (1.722836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.113872  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.114223  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:14.114248  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:14.114381  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.114434  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.117531  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24/status: (2.761358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.117720  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.958662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35638]
I0319 16:15:14.119041  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (2.752195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0319 16:15:14.120776  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (2.197099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35638]
I0319 16:15:14.121130  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.121368  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:14.121428  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:14.121567  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.121616  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.125469  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.012588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35642]
I0319 16:15:14.125541  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (2.541769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.125541  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25/status: (3.507343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0319 16:15:14.127478  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (1.546153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35640]
I0319 16:15:14.127812  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.128142  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:14.128193  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:14.128324  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.128372  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.130533  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.527701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.131214  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.142182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35644]
I0319 16:15:14.131580  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26/status: (2.531578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35642]
I0319 16:15:14.133545  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.50937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35644]
I0319 16:15:14.133858  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.134167  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:14.134189  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:14.134295  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.134345  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.136729  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.4635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.137178  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (2.548096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35644]
I0319 16:15:14.137387  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.137639  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:14.137690  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:14.137825  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.137890  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.139594  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (1.527573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35644]
I0319 16:15:14.139629  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-10.158d684a1687c17d: (3.33674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0319 16:15:14.140308  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27/status: (2.10368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.141565  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.575374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35644]
I0319 16:15:14.146985  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (5.622553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35632]
I0319 16:15:14.147407  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.147676  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:14.147691  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:14.147804  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.147850  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.151666  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.962166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35648]
I0319 16:15:14.152562  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (2.861035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0319 16:15:14.152567  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28/status: (2.357948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35644]
I0319 16:15:14.154735  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (1.424778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0319 16:15:14.154977  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.155225  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:14.155240  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:14.155357  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.155411  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.157049  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.245013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35648]
I0319 16:15:14.157121  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.479023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0319 16:15:14.157444  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.157743  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:14.157755  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:14.157836  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.157886  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.159106  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-11.158d684a16ed4e27: (2.778092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35650]
I0319 16:15:14.159726  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (1.594227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0319 16:15:14.160992  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.444025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35650]
I0319 16:15:14.161759  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29/status: (3.003329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35648]
I0319 16:15:14.166581  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (2.986576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35650]
I0319 16:15:14.166872  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.167060  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:14.167125  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:14.167258  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.167313  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.169735  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.031595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0319 16:15:14.169938  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30/status: (2.286504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35650]
I0319 16:15:14.171111  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (3.076624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35652]
I0319 16:15:14.172178  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (1.632838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35650]
I0319 16:15:14.172596  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.172815  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:14.172838  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:14.173002  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.173131  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.175661  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.850897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35654]
I0319 16:15:14.176295  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (2.876674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0319 16:15:14.176316  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31/status: (2.888338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35652]
I0319 16:15:14.178352  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (1.37406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0319 16:15:14.178621  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.178797  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:14.178833  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:14.178999  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.179053  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.181313  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (1.644696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35654]
I0319 16:15:14.183687  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32/status: (4.074942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0319 16:15:14.185913  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.078301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35654]
I0319 16:15:14.186374  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (6.131126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35656]
I0319 16:15:14.188057  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (2.46448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35646]
I0319 16:15:14.188621  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.188811  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:14.188859  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:14.189001  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.189054  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.192840  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33/status: (2.84249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35654]
I0319 16:15:14.193605  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.63211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35656]
I0319 16:15:14.194328  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (2.931801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35662]
I0319 16:15:14.197918  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (3.454609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35654]
I0319 16:15:14.198543  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.198838  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:14.198854  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:14.198967  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.199034  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.201445  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (1.406481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35656]
I0319 16:15:14.206548  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (6.290563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35664]
I0319 16:15:14.206895  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34/status: (6.731546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35662]
I0319 16:15:14.209309  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (1.739422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35664]
I0319 16:15:14.209615  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.209981  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:14.210007  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:14.210253  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.210307  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.215950  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (4.715671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35666]
I0319 16:15:14.216105  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (3.253368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35656]
I0319 16:15:14.216200  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35/status: (4.483618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35664]
I0319 16:15:14.218646  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (1.773479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35664]
I0319 16:15:14.218954  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.219340  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:14.219362  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:14.219480  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.219533  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.221441  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (1.390092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35666]
I0319 16:15:14.229280  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (8.918635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35668]
I0319 16:15:14.229421  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36/status: (9.060139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35664]
I0319 16:15:14.232357  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (1.893422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35664]
I0319 16:15:14.232786  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.232995  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:14.233007  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:14.233375  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.233430  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.235910  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (2.113775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35666]
I0319 16:15:14.237407  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.087786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35670]
I0319 16:15:14.242213  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37/status: (6.46706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35668]
I0319 16:15:14.245324  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (1.77933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35670]
I0319 16:15:14.245717  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.245975  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:14.246022  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:14.246302  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.246358  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.251000  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (3.664631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35666]
I0319 16:15:14.251696  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.270925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35672]
I0319 16:15:14.252843  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38/status: (6.039462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35670]
I0319 16:15:14.255424  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (1.845617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35672]
I0319 16:15:14.255771  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.256151  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:14.256174  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:14.256288  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.256341  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.258971  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (2.264786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35666]
I0319 16:15:14.259846  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.568706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35674]
I0319 16:15:14.260020  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39/status: (2.855325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35672]
I0319 16:15:14.264671  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (3.677707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35674]
I0319 16:15:14.266197  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.266403  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:14.266472  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:14.266627  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.266682  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.269139  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (1.809996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35666]
I0319 16:15:14.269952  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40/status: (2.436208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35674]
I0319 16:15:14.270644  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.214319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35676]
I0319 16:15:14.272905  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (1.704058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35676]
I0319 16:15:14.273473  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.273760  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:14.273808  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:14.273930  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.273980  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.276233  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (1.837301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35666]
I0319 16:15:14.277438  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41/status: (3.123753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35676]
I0319 16:15:14.278950  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.423088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35678]
I0319 16:15:14.281564  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (2.58516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35676]
I0319 16:15:14.283810  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.284151  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:14.284190  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:14.284532  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.285322  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.289852  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (3.47442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35678]
I0319 16:15:14.289872  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (3.558543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35666]
I0319 16:15:14.291355  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.291613  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:14.291670  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:14.291916  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.292003  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.293371  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.065309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I0319 16:15:14.293392  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-17.158d684a1ad52c32: (6.454721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35680]
I0319 16:15:14.293959  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (1.682877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35678]
I0319 16:15:14.295384  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42/status: (2.923071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35666]
I0319 16:15:14.295806  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.803743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35680]
I0319 16:15:14.297724  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (1.73247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35678]
I0319 16:15:14.298199  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.298542  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:14.298564  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:14.298669  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.298721  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.314800  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (14.602976ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35688]
I0319 16:15:14.317694  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (18.150558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I0319 16:15:14.318432  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43/status: (18.848331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35680]
I0319 16:15:14.320811  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (1.795573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I0319 16:15:14.321216  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.321494  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:14.321517  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:14.321635  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.321675  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.324422  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.712463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35690]
I0319 16:15:14.327286  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (5.310642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35688]
I0319 16:15:14.327544  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44/status: (5.343483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35682]
I0319 16:15:14.332270  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (4.241768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35688]
I0319 16:15:14.332713  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.332947  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:14.332970  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:14.333116  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.333167  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.337753  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.600819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.337871  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (3.133215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35690]
I0319 16:15:14.337996  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45/status: (3.233023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35688]
I0319 16:15:14.349513  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (3.693219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35690]
I0319 16:15:14.349893  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.350193  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:14.350237  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:14.350376  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.350440  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.353005  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (1.557993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.356724  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.905472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35694]
I0319 16:15:14.357549  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46/status: (6.707308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35690]
I0319 16:15:14.360107  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (1.888955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35694]
I0319 16:15:14.360446  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.360634  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:14.360652  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:14.360740  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.360795  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.364774  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (3.140823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.365645  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.098022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35696]
I0319 16:15:14.366321  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47/status: (4.264706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35694]
I0319 16:15:14.369536  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (1.988628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35696]
I0319 16:15:14.369820  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.370008  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:14.370026  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:14.370224  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.370390  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.373033  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (2.379096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35696]
I0319 16:15:14.373803  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.55418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.394742  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48/status: (20.336905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35696]
I0319 16:15:14.396339  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.503018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.399252  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (3.757087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35696]
I0319 16:15:14.402995  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.403341  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:14.403388  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:14.403549  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.403631  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.408801  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.576789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35698]
I0319 16:15:14.408880  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49/status: (3.734575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35696]
I0319 16:15:14.409365  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (4.771419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.410881  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (1.318966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35696]
I0319 16:15:14.411249  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.411549  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:14.411577  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:14.411692  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.411748  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.414784  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (2.733352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35698]
I0319 16:15:14.415534  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (3.099885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.415629  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-21.158d684a2066aa8a: (2.977894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35700]
I0319 16:15:14.415765  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
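
Note the PATCH on events/ppod-21.158d684a2066aa8a returning 200, where earlier failures produced POST .../events with 201: the client-go event recorder correlates identical events and bumps the count on the existing Event object instead of creating a new one. A sketch of the standard recorder wiring that yields this POST-then-PATCH pattern, run against a fake clientset so it is self-contained; the component name, UID, and flush sleep are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
)

func main() {
	client := fake.NewSimpleClientset()
	bc := record.NewBroadcaster()
	bc.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: client.CoreV1().Events(""),
	})
	rec := bc.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "scheduler-test"})

	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{
		Name: "ppod-21", Namespace: "demo", UID: "uid-21",
	}}
	// The first identical event is created (POST, 201); repeats are
	// correlated and land as PATCHes that increment the event's count.
	for i := 0; i < 3; i++ {
		rec.Event(pod, corev1.EventTypeWarning, "FailedScheduling",
			"0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.")
	}
	time.Sleep(200 * time.Millisecond) // let the async broadcaster flush
	bc.Shutdown()

	evs, _ := client.CoreV1().Events("demo").List(context.TODO(), metav1.ListOptions{})
	fmt.Println("events stored:", len(evs.Items))
}
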
I0319 16:15:14.416034  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:14.416050  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:14.416221  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.416261  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.418820  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (2.177923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35698]
I0319 16:15:14.419390  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (2.577146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.419682  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.419849  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:14.419868  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:14.419976  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.420033  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.420745  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-23.158d684a2144b9a7: (3.285348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35702]
I0319 16:15:14.423509  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (2.722753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35698]
I0319 16:15:14.423975  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (3.654513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.424661  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.430343  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-24.158d684a21ab1f5e: (7.391963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35702]
I0319 16:15:14.430858  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:14.430917  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:14.431171  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.431272  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.435760  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (3.85621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.436304  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (3.987923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35698]
I0319 16:15:14.438474  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-28.158d684a23a8fc2c: (4.845605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35704]
I0319 16:15:14.439233  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.439491  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:14.439549  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:14.439734  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.439825  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.456836  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (16.430397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35704]
I0319 16:15:14.457614  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (3.958873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.457986  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.458280  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:14.458359  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:14.458593  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.458691  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.460661  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (1.680223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.461364  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (1.617595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35704]
I0319 16:15:14.461670  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.465025  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-30.158d684a24d1ff37: (11.349612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35706]
I0319 16:15:14.466407  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:14.466474  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:14.466658  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.466737  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.468900  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (1.87882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35704]
I0319 16:15:14.469223  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.469910  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:14.469925  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:14.470024  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.470112  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.473934  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-31.158d684a252aa140: (6.918135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.475648  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (4.957117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35708]
I0319 16:15:14.475660  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (4.30095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:14.476026  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (4.350593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35704]
I0319 16:15:14.476372  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.478384  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:14.478438  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:14.478591  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.478676  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.480316  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-33.158d684a261dbcf1: (5.665751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.481552  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (1.991334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35708]
I0319 16:15:14.485666  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (6.420358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:14.487501  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.487641  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-34.158d684a26b60610: (5.952637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.487717  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:14.487728  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:14.487845  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.487885  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.491186  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (2.594257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:14.491735  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (3.515855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35708]
I0319 16:15:14.493984  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-35.158d684a27620442: (3.334422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35712]
I0319 16:15:14.495139  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.496538  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.898831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.497204  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:14.497262  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:14.498757  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:14.499010  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:14.499265  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-39.158d684a2a207282: (3.960096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35708]
I0319 16:15:14.502739  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (3.281236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:14.504551  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (4.807049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:14.505390  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:14.511476  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-43.158d684a2ca71a0c: (9.692577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35708]
I0319 16:15:14.550329  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:14.552589  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:14.553136  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:14.552535  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:14.553788  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
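
The five reflector.go:235 "forcing resync" lines are the shared informers hitting their resync period: cached objects are re-delivered to registered handlers as update callbacks even though nothing changed server-side. A sketch of that wiring, with a 30s period standing in for whatever the test actually configures:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/tools/cache"
)

func main() {
	client := fake.NewSimpleClientset()
	stop := make(chan struct{})
	defer close(stop)

	// Every resync period the reflector logs "forcing resync" and the
	// informer re-delivers cached objects through UpdateFunc.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			pod := newObj.(*corev1.Pod)
			fmt.Println("resync/update for", pod.Name)
		},
	})

	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	// A real process would block here; the resyncs fire every 30s.
}
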
I0319 16:15:14.589398  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.98512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:14.689914  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.560912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:14.790903  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (3.555069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:14.889641  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.105657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:14.990413  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.960662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.091216  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.145379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.190119  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.546199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.291339  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (3.887866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.389553  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.939231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
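
The once-per-~100ms GET /pods/preemptor-pod requests above are a test-side wait loop polling until the preemptor is assigned a node. A sketch of such a loop; the 100ms/30s values are read off the log spacing, not taken from the test source, and waitForScheduled is a hypothetical helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// waitForScheduled polls until the pod has been bound to a node,
// mirroring the repeated GETs of preemptor-pod in the log.
func waitForScheduled(client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second,
		func() (bool, error) {
			pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return pod.Spec.NodeName != "", nil
		})
}

func main() {
	client := fake.NewSimpleClientset(&corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod", Namespace: "demo"},
		Spec:       corev1.PodSpec{NodeName: "node1"},
	})
	fmt.Println(waitForScheduled(client, "demo", "preemptor-pod"))
}
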
I0319 16:15:15.441265  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:15.441307  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:15.441544  106300 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod", node "node1"
I0319 16:15:15.441567  106300 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0319 16:15:15.441614  106300 factory.go:733] Attempting to bind preemptor-pod to node1
I0319 16:15:15.441932  106300 cache.go:643] Couldn't expire cache for pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod. Binding is still in progress.
I0319 16:15:15.442036  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:15.442053  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:15.442231  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.442288  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.446277  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod/binding: (2.597409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.446590  106300 scheduler.go:572] pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
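
The bind itself is the POST to the pods/preemptor-pod/binding subresource two lines up, after which scheduler.go:572 logs success. A sketch of issuing that call, assuming current client-go's Bind signature and a fake clientset:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
)

func main() {
	client := fake.NewSimpleClientset(&corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod", Namespace: "demo"},
	})
	// POST .../pods/preemptor-pod/binding, as in the log line above:
	// the Binding names the pod and targets the chosen node.
	binding := &corev1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod", Namespace: "demo"},
		Target:     corev1.ObjectReference{Kind: "Node", Name: "node1"},
	}
	err := client.CoreV1().Pods("demo").Bind(context.TODO(), binding, metav1.CreateOptions{})
	fmt.Println("bind:", err)
}
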
I0319 16:15:15.447041  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (3.889687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:15.447195  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (3.353151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36102]
I0319 16:15:15.447336  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.447639  106300 backoff_utils.go:79] Backing off 2s
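
After losing node1 to the preemptor, failed pods such as ppod-0 are parked by backoff_utils.go for 2s rather than retried immediately; the per-pod delay doubles with each failed attempt up to a cap. A toy version of that doubling; the 1s initial and 10s cap are illustrative defaults, not values read from this test:

package main

import (
	"fmt"
	"time"
)

// backoffFor doubles the delay per failed scheduling attempt, capped at max.
// The log above shows these pods parked for 2s.
func backoffFor(attempts int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 1; i < attempts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 5; n++ {
		fmt.Printf("attempt %d -> %v\n", n, backoffFor(n, time.Second, 10*time.Second))
	}
}
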
I0319 16:15:15.447780  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:15.447800  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:15.447895  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.447935  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.449002  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-0.158d684a0ef400a9: (5.77659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36104]
I0319 16:15:15.450893  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (2.260387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:15.451323  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.451447  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.775399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36104]
I0319 16:15:15.451758  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:15.451774  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:15.451953  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.451996  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.459123  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (6.230036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.459160  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (6.924438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:15.459446  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.459696  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.459745  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:15.459755  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:15.459831  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.459859  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (11.540561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.459886  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.460881  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.464537  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (4.170568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.464537  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (4.181052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35692]
I0319 16:15:15.464830  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.464906  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.465690  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:15.465710  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:15.465836  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.465896  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.468017  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (1.68728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.468018  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (1.582744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.468343  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.468347  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.468558  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:15.468573  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:15.468656  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.468692  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.470677  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.813801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.470747  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.828993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.470969  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.470989  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
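
`generic_scheduler.go:1118] Node node1 is a potential node for preemption` means the scheduler found a node where evicting lower-priority pods could make the pending pod fit. A hedged sketch of that candidate test, reduced to CPU/memory scalars; the real victim-selection logic also honors PodDisruptionBudgets and picks a minimal victim set:

```go
// Hedged sketch of why node1 keeps showing up as a "potential node for
// preemption": a node is a candidate if evicting its lower-priority pods
// could free enough resources for the pending pod. Simplified to CPU and
// memory scalars; the real generic_scheduler logic is more involved.
package main

import "fmt"

type pod struct {
	name     string
	priority int32
	cpuMilli int64
	memBytes int64
}

func potentialVictims(pending pod, running []pod, freeCPU, freeMem int64) ([]pod, bool) {
	var victims []pod
	for _, p := range running {
		if p.priority < pending.priority {
			victims = append(victims, p)
			freeCPU += p.cpuMilli
			freeMem += p.memBytes
		}
	}
	return victims, freeCPU >= pending.cpuMilli && freeMem >= pending.memBytes
}

func main() {
	preemptor := pod{name: "preemptor-pod", priority: 100, cpuMilli: 500, memBytes: 1 << 29}
	running := []pod{{name: "ppod-0", priority: 0, cpuMilli: 400, memBytes: 1 << 28}}
	if v, ok := potentialVictims(preemptor, running, 200, 1<<28); ok {
		fmt.Printf("node1 is a potential node for preemption, victims: %v\n", v)
	}
}
```
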
I0319 16:15:15.471564  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:15.471590  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:15.471703  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.471835  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.474649  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (1.737426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.474660  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (1.670366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.474928  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.474998  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.475336  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:15.475358  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:15.475450  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.475510  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.477446  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.46324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.477716  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.478252  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (2.318532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.478374  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:15.478397  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:15.478541  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.478581  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.478655  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.480248  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.424646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.480348  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.60237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.480537  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.480635  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.480878  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:15.480893  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:15.480968  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.480976  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-1.158d684a0f90808d: (28.652146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36104]
I0319 16:15:15.481015  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.482713  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (1.429834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36104]
I0319 16:15:15.483165  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (1.924533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.483392  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.483519  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.483574  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:15.483598  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:15.483764  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.483826  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.484772  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-3.158d684a10b90c05: (3.068763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
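
The `POST .../events` (201) and `PATCH .../events/ppod-N...` (200) requests are the scheduler's event recorder at work: the first `FailedScheduling` occurrence for a pod is POSTed as a new Event, and repeats are PATCHed onto the existing event to bump its count. A sketch of the recorder wiring using the real client-go API; the clientset and pod are assumed to come from the surrounding test:

```go
// Sketch of what produces the POST/PATCH .../events traffic: the scheduler
// records a FailedScheduling event for each unschedulable pod through a
// client-go EventRecorder. The broadcaster/recorder wiring is the real
// client-go API; cs and pod are placeholders from the surrounding test.
package recorderdemo

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
)

func recordFailedScheduling(cs kubernetes.Interface, pod *v1.Pod, msg string) {
	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: cs.CoreV1().Events(pod.Namespace)})
	recorder := broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "default-scheduler"})
	recorder.Eventf(pod, v1.EventTypeWarning, "FailedScheduling", "%s", msg)
}
```
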
I0319 16:15:15.487328  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (2.79143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.487708  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (3.177901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36104]
I0319 16:15:15.487886  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.488144  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-4.158d684a11cabeee: (2.763974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.488382  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.488756  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:15.488778  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:15.488880  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.488919  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.490564  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.558998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.491680  106300 preemption_test.go:583] Check unschedulable pods still exist and were never scheduled...
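
`preemption_test.go:583` marks the test's verification phase: the burst of `GET .../pods/ppod-N` requests that follows reads every low-priority pod back to confirm it still exists and was never bound. A minimal sketch of such a check, using the 1.14-era client-go `Get` signature (no context argument); `cs`, `ns`, and `count` are assumed to come from the test harness:

```go
// Minimal sketch of the "check unschedulable pods still exist and were never
// scheduled" phase that produces the burst of GET /pods/ppod-N requests:
// read each pod back and assert it was never bound. Uses the 1.14-era
// client-go Get signature (no context argument).
package preemption

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func checkNeverScheduled(cs kubernetes.Interface, ns string, count int) error {
	for i := 0; i < count; i++ {
		name := fmt.Sprintf("ppod-%d", i)
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return err // the pod should still exist
		}
		if pod.Spec.NodeName != "" {
			return fmt.Errorf("pod %s was scheduled to node %s", name, pod.Spec.NodeName)
		}
	}
	return nil
}
```
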
I0319 16:15:15.491882  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (1.9743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36104]
I0319 16:15:15.492336  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.492342  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (2.569751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36108]
I0319 16:15:15.492663  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:15.492680  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:15.492763  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.492788  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.492839  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.493830  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (1.961373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.494128  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-6.158d684a13c3ac26: (5.323068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.494970  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (1.565726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36110]
I0319 16:15:15.495331  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.495547  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (1.323122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.495756  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (2.038355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36104]
I0319 16:15:15.495981  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.496168  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:15.496244  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:15.496375  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.496411  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.496882  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (965.843µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.498607  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (1.863261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36110]
I0319 16:15:15.499156  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.883717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.499640  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (2.893001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36104]
I0319 16:15:15.499990  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.500269  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:15.500303  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:15.500399  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.500486  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.501375  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.501659  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-8.158d684a1491ed66: (6.975686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35710]
I0319 16:15:15.503929  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (1.717271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36104]
I0319 16:15:15.504385  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (2.596082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.504657  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.504860  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.505371  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:15.505391  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:15.505542  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.505600  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.506262  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (4.511557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36110]
I0319 16:15:15.506636  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-9.158d684a14e68402: (3.25328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36112]
I0319 16:15:15.508506  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (1.576649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36110]
I0319 16:15:15.508564  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (2.367624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.508844  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.508870  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (2.287973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36114]
I0319 16:15:15.509128  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.509231  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:15.509245  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:15.509334  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.509379  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.510611  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (1.692952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36110]
I0319 16:15:15.511902  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-2.158d684a1024b135: (3.859768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36112]
I0319 16:15:15.513425  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (3.476339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36114]
I0319 16:15:15.513742  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (2.15289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36110]
I0319 16:15:15.514001  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.514351  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:15.514384  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:15.514533  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.514596  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.515832  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.666517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36114]
I0319 16:15:15.516243  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-12.158d684a176f520e: (3.669101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36112]
I0319 16:15:15.518013  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (1.225491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36116]
I0319 16:15:15.518161  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (2.452265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36110]
I0319 16:15:15.518357  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.518386  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.518902  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:15.518927  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:15.519020  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.519107  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.519386  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (8.74371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.519668  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.522303  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (2.706572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36116]
I0319 16:15:15.522677  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.523330  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (6.163389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36114]
I0319 16:15:15.523621  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (3.634225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36110]
I0319 16:15:15.524710  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-13.158d684a17da6901: (7.508476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36112]
I0319 16:15:15.526559  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.659429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36116]
I0319 16:15:15.527272  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.528291  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-14.158d684a183be348: (2.76541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36112]
I0319 16:15:15.534676  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-5.158d684a13231139: (5.720905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36116]
I0319 16:15:15.540245  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-15.158d684a192c6d5e: (4.897567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36116]
I0319 16:15:15.542180  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:15.542215  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:15.542347  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.542399  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.550539  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:15.552750  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:15.553290  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:15.553952  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:15.556611  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
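
The `reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync` lines are the shared informers periodically replaying their caches so event handlers can reconcile any missed state. A sketch of wiring an informer factory with a resync period; the constructor and handler registration are the real client-go API, while the clientset and handler body are placeholders:

```go
// Sketch of where the "forcing resync" lines originate: shared informers
// built with a non-zero resync period periodically redeliver their cached
// objects as Update events.
package informerdemo

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func startPodInformer(cs kubernetes.Interface, stop <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			// On a forced resync, oldObj and newObj are the same cached object.
		},
	})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
}
```
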
I0319 16:15:15.565747  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-16.158d684a1a067b8d: (24.678796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36116]
I0319 16:15:15.565822  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (2.952693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36118]
I0319 16:15:15.566367  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (3.659275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.566581  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.566631  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.566746  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (3.879932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36120]
I0319 16:15:15.566905  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:15.566918  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:15.567040  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.567125  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.569170  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.738293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36118]
I0319 16:15:15.569174  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.663417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.569515  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.569586  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.558835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36120]
I0319 16:15:15.569622  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.569870  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:15.569885  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:15.569962  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.570003  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.570988  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-18.158d684a1dee0179: (4.456198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36116]
I0319 16:15:15.572427  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.819058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36122]
I0319 16:15:15.572434  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.877834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36118]
I0319 16:15:15.572729  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.572842  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (2.229584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36106]
I0319 16:15:15.572880  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:15.572896  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:15.572991  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.573007  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.573043  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.575220  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (1.526028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36122]
I0319 16:15:15.575494  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.575673  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-7.158d684a14289d74: (3.150272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36116]
I0319 16:15:15.575911  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (1.579751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36124]
I0319 16:15:15.576288  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (2.685643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36118]
I0319 16:15:15.576437  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.576507  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:15.576523  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:15.576619  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.576658  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.578325  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.467297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36122]
I0319 16:15:15.578796  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.578815  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (1.959905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36124]
I0319 16:15:15.579094  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:15.579131  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:15.579222  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.579264  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.580553  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (1.105721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36122]
I0319 16:15:15.580760  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.580926  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (1.39816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36124]
I0319 16:15:15.580937  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:15.580966  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (1.046301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36128]
I0319 16:15:15.580968  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:15.581057  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.581140  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.581825  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.582475  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-19.158d684a1f94b449: (5.749638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36116]
I0319 16:15:15.582563  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (5.377252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36126]
I0319 16:15:15.582794  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.583426  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (1.026717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36130]
I0319 16:15:15.583474  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (1.794728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36122]
I0319 16:15:15.583732  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.583934  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.584322  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (2.258947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36124]
I0319 16:15:15.584422  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:15.584449  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:15.584579  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.584648  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.586010  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-20.158d684a200ee111: (2.668904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36126]
I0319 16:15:15.586372  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (1.573384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36122]
I0319 16:15:15.586652  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.586785  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:15.586817  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:15.586924  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.586975  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.588586  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (3.74137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36128]
I0319 16:15:15.589613  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-22.158d684a20c988fb: (2.991569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36126]
I0319 16:15:15.590301  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.590841  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (3.604955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36122]
I0319 16:15:15.591514  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.592732  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (2.347412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36126]
I0319 16:15:15.594039  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (1.734873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36122]
I0319 16:15:15.594493  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (1.081941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36126]
I0319 16:15:15.594856  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.595039  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:15.595059  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:15.595208  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.595254  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.596882  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (1.836985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36122]
I0319 16:15:15.597884  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (1.953462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36132]
I0319 16:15:15.598422  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (2.966077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36124]
I0319 16:15:15.598506  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.598670  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-25.158d684a2218b9d5: (7.610827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36128]
I0319 16:15:15.598702  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.598902  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:15.598924  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:15.599170  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.599219  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.599585  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (1.858959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36122]
I0319 16:15:15.601502  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (1.939352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36124]
I0319 16:15:15.601755  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.601926  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:15.601947  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:15.602041  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.602137  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.603345  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (3.616366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36132]
I0319 16:15:15.603713  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-26.158d684a227fd126: (3.197853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36134]
I0319 16:15:15.603779  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.604239  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (1.644978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36124]
I0319 16:15:15.604274  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (3.620045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36122]
I0319 16:15:15.604647  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.604834  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:15.604855  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:15.604949  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.604985  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.605433  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (2.778116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36136]
I0319 16:15:15.605748  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.606813  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (1.297111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.607255  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (2.518916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36122]
I0319 16:15:15.607582  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-10.158d684a1687c17d: (2.830849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36134]
I0319 16:15:15.607709  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (2.53457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36132]
I0319 16:15:15.608776  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.609888  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:15.609919  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:15.609998  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (1.793729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36134]
I0319 16:15:15.610010  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.610052  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.612405  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-27.158d684a23110574: (4.140113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36136]
I0319 16:15:15.614622  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (4.301306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36132]
I0319 16:15:15.614964  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.616137  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (5.339215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36134]
I0319 16:15:15.616233  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:15.616792  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:15.616897  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.616955  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.617043  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-11.158d684a16ed4e27: (4.069948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36136]
I0319 16:15:15.616648  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (5.766279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36140]
I0319 16:15:15.617938  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.618752  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.620029  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.977993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36136]
I0319 16:15:15.620601  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (2.232303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36140]
I0319 16:15:15.620923  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.621308  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-29.158d684a24422ade: (3.2751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36142]
I0319 16:15:15.621435  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (3.822208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36132]
I0319 16:15:15.621725  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.621883  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:15.621900  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:15.621977  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.622016  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.624437  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (2.087636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36134]
I0319 16:15:15.624859  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.627577  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (5.299535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.627891  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.628324  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (3.355708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36134]
I0319 16:15:15.628660  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-32.158d684a25851eb7: (4.671996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36136]
I0319 16:15:15.632051  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (1.675963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.633814  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:15.633866  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:15.634002  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.634051  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.637159  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (2.19304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36136]
I0319 16:15:15.637468  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.638278  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:15.638295  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:15.638398  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.638417  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (3.81287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36142]
I0319 16:15:15.638446  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.638870  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.640365  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (1.408014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36136]
I0319 16:15:15.640788  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (2.071904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36142]
I0319 16:15:15.641026  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.641280  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.641476  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:15.641495  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:15.641611  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.641661  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.642533  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-36.158d684a27eec6ea: (6.382257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.643617  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (10.988216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.645828  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (3.765684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36142]
I0319 16:15:15.646308  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (1.530791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.646782  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.647285  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (5.014207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36136]
I0319 16:15:15.648736  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-37.158d684a28c2dd16: (4.331792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.648900  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (1.963563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36142]
I0319 16:15:15.649712  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.649892  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:15.649919  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:15.650011  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.650154  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.653177  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (2.544998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.653747  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.653948  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (1.795846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36146]
I0319 16:15:15.654220  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (3.835373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36136]
I0319 16:15:15.655212  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.655846  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-38.158d684a29882543: (5.470765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.656046  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:15.656131  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:15.656280  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.656347  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.658515  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (3.044383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36146]
I0319 16:15:15.662480  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (3.144847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.662861  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.662918  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (4.084164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36148]
I0319 16:15:15.663281  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.663962  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:15.663982  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:15.664170  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.664227  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.664589  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (2.697803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36146]
I0319 16:15:15.666754  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (2.105959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36148]
I0319 16:15:15.667005  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.667221  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:15.667255  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:15.667362  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.667419  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.668408  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (3.459988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36146]
I0319 16:15:15.668959  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (4.198049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.669938  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.671448  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (3.39319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36148]
I0319 16:15:15.671759  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.671989  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (2.642819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36146]
I0319 16:15:15.672055  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:15.672125  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:15.672239  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:15.672284  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:15.672520  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.674047  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (4.641326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36150]
I0319 16:15:15.674125  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (1.527122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.675685  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:15.676692  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (1.910003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36150]
I0319 16:15:15.678677  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (2.496909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.682349  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (2.284434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36150]
I0319 16:15:15.685858  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (2.909114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36150]
I0319 16:15:15.687922  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (1.554541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36150]
I0319 16:15:15.689843  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:15.690392  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (1.995622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36150]
I0319 16:15:15.692863  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-40.158d684a2abe4187: (32.532505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.696026  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (4.571322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.697303  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-41.158d684a2b2d9191: (3.592189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.701328  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-17.158d684a1ad52c32: (2.641395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.705226  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-42.158d684a2c409a06: (3.215761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.708877  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-44.158d684a2e0561eb: (2.908346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.712775  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (1.918985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.713203  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-45.158d684a2eb4bde9: (3.606933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.715495  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (2.122838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.716981  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-46.158d684a2fbc43f1: (3.182839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.717901  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (1.312196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.720488  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (2.157189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.723012  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-47.158d684a305a36b0: (4.813086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.723395  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (1.674976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.726355  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-48.158d684a30eca61a: (2.556839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.726440  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (2.266213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36138]
I0319 16:15:15.729237  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (2.078561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.729522  106300 preemption_test.go:598] Cleaning up all pods...
I0319 16:15:15.731217  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-49.158d684a32e7e6f2: (3.912502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.734494  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:15.734539  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:15.736706  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-21.158d684a2066aa8a: (4.42887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.736890  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (7.077355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
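Each DELETE in this cleanup phase also wakes the scheduling queue one last time for the doomed pod, and the scheduler bails out as soon as it sees the deletion is in flight (the "Skip schedule deleting pod" lines from scheduler.go:449). A sketch of that guard, using stand-in types rather than the real API objects:

package main

import (
	"fmt"
	"time"
)

// pod models only the field the guard reads; deletionTimestamp becomes
// non-nil once the API server has accepted a DELETE for the pod.
type pod struct {
	namespace, name   string
	deletionTimestamp *time.Time
}

// skipPodSchedule reports whether a scheduling pass should be cut short
// because the pod is already being deleted.
func skipPodSchedule(p *pod) bool {
	if p.deletionTimestamp != nil {
		fmt.Printf("Skip schedule deleting pod: %s/%s\n", p.namespace, p.name)
		return true
	}
	return false
}

func main() {
	now := time.Now()
	p := &pod{namespace: "test-ns", name: "ppod-0", deletionTimestamp: &now}
	fmt.Println(skipPodSchedule(p)) // prints the skip line, then true
}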
I0319 16:15:15.741426  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-23.158d684a2144b9a7: (3.186749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.742708  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:15.742777  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:15.743674  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (5.006014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.744973  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-24.158d684a21ab1f5e: (2.977541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.747663  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:15.747697  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:15.749599  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-28.158d684a23a8fc2c: (3.957357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.750472  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (6.456011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.753376  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.34757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.754011  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:15.754051  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:15.755813  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.621761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.757593  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (6.747082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.758500  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.649976ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.761296  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.025247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.761801  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:15.761960  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:15.764037  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.652761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.764497  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (5.770106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.768036  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:15.768124  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:15.770675  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.882899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.771153  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (6.100035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.774395  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:15.774505  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:15.776217  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (4.699723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.777238  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.783496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.780790  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:15.780858  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:15.783441  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (5.733625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.784324  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.043175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.787864  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:15.788166  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:15.789772  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (5.126854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.790845  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.269423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.793355  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:15.793395  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:15.794854  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (4.508405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.795783  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.080281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.799054  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:15.799152  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:15.801346  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.847242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.802779  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (6.85712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.806766  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:15.806818  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:15.807980  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (4.719727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.809551  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.934889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.812271  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:15.812314  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:15.813696  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (5.248324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.814163  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.515217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.819388  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:15.819535  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (5.363352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.819878  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:15.824494  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:15.824581  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:15.825448  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (4.889526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.826004  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (5.432027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.829811  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:15.829854  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:15.831582  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (5.60948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.834274  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (5.88854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.835967  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:15.836045  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:15.837142  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.963788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.838791  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (5.966514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.839532  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.919348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.843170  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:15.843217  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:15.845234  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.711216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.845925  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (6.75654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.852197  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (5.812169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.856239  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:15.856292  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:15.859472  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (6.615631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.859936  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.186153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.863387  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:15.863496  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:15.866806  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (7.030082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.873580  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (9.753866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.881418  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:15.881495  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:15.884758  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.744871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.889669  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (21.123077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.895240  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:15.895326  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:15.898938  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (8.052713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.901702  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (5.94199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.906847  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:15.906929  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:15.908868  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (7.745003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.912374  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (4.967142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.916177  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:15.916697  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:15.920178  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.22506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.921353  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (11.404271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.927516  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:15.927760  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:15.930422  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.166871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.931640  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (9.832918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.937983  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:15.938800  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:15.939700  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (7.436744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.943111  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.81165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.945317  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:15.945429  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:15.948497  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.572475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.949227  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (8.940931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.955526  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:15.955794  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:15.958572  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.419992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.958958  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (8.959872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.963308  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:15.964000  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:15.964561  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (4.9703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.967906  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.562784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.968324  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:15.968401  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:15.970299  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (5.377385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.970590  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.89274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.975527  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:15.975619  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:15.978691  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.654032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.980917  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (9.691291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.986350  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:15.986792  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:15.989690  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (8.262131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.991241  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.886753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:15.994307  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:15.994791  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:15.996702  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (6.077127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:15.997211  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.08865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.002261  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:16.002366  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:16.003019  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (5.692224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.005672  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.882528ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.009502  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:16.009643  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:16.010629  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (5.208257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.012038  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.038859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.015222  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:16.015293  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:16.017359  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (5.618244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.018114  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.141277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.021492  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:16.021561  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:16.022797  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (5.087056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.025177  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.916598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.026316  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:16.026375  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:16.027789  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (4.610814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.031023  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.692792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.031702  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:16.031750  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:16.033949  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (5.807156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.034443  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.165538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.037884  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:16.037941  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:16.040225  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (5.858156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.040307  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.878706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.043892  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:16.045490  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (4.830199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.045582  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:16.047815  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.855763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.049956  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:16.050009  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:16.051765  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (5.523901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.052738  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.33979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.055785  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:16.055829  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:16.060396  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (8.268847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.061681  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (5.14783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.066749  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:16.066794  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:16.147176  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (79.991625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.162580  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (101.376588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.172795  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:16.172855  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:16.175950  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.409829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.177840  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (14.863266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.183412  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:16.183496  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:16.192007  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (8.205693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.192264  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (13.700162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.213186  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:16.213257  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:16.215392  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (22.696184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.215971  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.096761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.219384  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:16.219438  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:16.222486  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.495488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.239645  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (21.799331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.264381  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:16.264437  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:16.267743  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.752255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.271659  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (30.408364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.319978  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-0: (47.658243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.323357  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1: (2.004496ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.331309  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (7.39616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
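From preemption_test.go:598 onward the test tears everything down: it DELETEs each pod, then polls GET on the same path until the API server answers 404 (the run of 404s below is that confirmation). A stdlib-only sketch of the delete-then-verify pattern against the paths the log shows; the server address and namespace are placeholders, since the real namespace is generated per test run.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// base and ns are assumed placeholders for the test API server and namespace.
const (
	base = "http://127.0.0.1:8080"
	ns   = "preemption-race-example"
)

// deleteAndVerify issues the DELETE, then polls GET until the pod is gone,
// matching the DELETE-200-then-GET-404 pairs in the log.
func deleteAndVerify(name string) error {
	url := fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s", base, ns, name)
	req, err := http.NewRequest(http.MethodDelete, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	for i := 0; i < 50; i++ {
		get, err := http.Get(url)
		if err != nil {
			return err
		}
		get.Body.Close()
		if get.StatusCode == http.StatusNotFound {
			return nil // deletion confirmed
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s still present after delete", ns, name)
}

func main() {
	for i := 0; i < 50; i++ {
		if err := deleteAndVerify(fmt.Sprintf("ppod-%d", i)); err != nil {
			fmt.Println(err)
		}
	}
}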
I0319 16:15:16.334194  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (1.241817ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.336996  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (1.291725ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.339797  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.167096ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.342518  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.166549ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.345059  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (938.113µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.347828  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (1.179141ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.350502  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (1.154073ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.353243  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (1.09416ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.355636  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (971.639µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.358013  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (912.233µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.360564  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (887.926µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.363217  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (946.791µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.365630  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (951.85µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.368162  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (1.056182ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.370723  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (999.348µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.373309  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (1.087987ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.375799  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (998.814µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.378338  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (1.048883ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.380783  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (981.358µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.383431  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (1.046095ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.387803  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (2.865259ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.391255  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (1.904647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.393646  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (919.577µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.396510  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (1.184555ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.403574  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (1.165716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.407127  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (2.081387ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.409959  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.212033ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.413087  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (1.500296ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.417260  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (2.089911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.420448  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (1.114501ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.425136  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (2.369732ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.427783  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (1.059791ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.430362  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (1.074284ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.432995  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (1.05589ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.435567  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (1.017124ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.438399  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (1.339573ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.440880  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (987.21µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.443431  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (1.096506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.446007  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (1.001158ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.448443  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (929.523µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.455341  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (5.220939ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.459187  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (1.599848ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.461895  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (1.206952ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.484916  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (8.835736ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.491580  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (5.01406ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.498278  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (5.014695ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.503683  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (2.416476ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.508634  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (3.208444ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.518225  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (6.596238ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.530865  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (7.767256ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.534175  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-0: (1.494558ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.536971  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1: (1.137133ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.540093  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.488514ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
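The long run of GET/404s above is the test polling to confirm that every pod from the previous round (ppod-0 through ppod-49, rpod-0, rpod-1, and preemptor-pod) is gone before the next iteration starts. A minimal sketch of such a cleanup check with client-go (pre-context signatures, as on this branch); the poll interval and timeout are assumptions, not the test's actual values:

package cleanupsketch

import (
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsGone polls until a GET for every named pod returns NotFound,
// mirroring the 404 request pattern logged by wrap.go above.
func waitForPodsGone(cs kubernetes.Interface, ns string, names []string) error {
	return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		for _, name := range names {
			_, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
			if err == nil {
				return false, nil // pod still exists; poll again
			}
			if !apierrors.IsNotFound(err) {
				return false, err // unexpected error; give up
			}
			// IsNotFound corresponds to the 404s in the log above.
		}
		return true, nil
	})
}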
I0319 16:15:16.543188  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.485726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.543882  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0
I0319 16:15:16.543905  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0
I0319 16:15:16.544044  106300 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0", node "node1"
I0319 16:15:16.544075  106300 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0319 16:15:16.544179  106300 factory.go:733] Attempting to bind rpod-0 to node1
I0319 16:15:16.547100  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-0/binding: (2.635659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.547162  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.177696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.547793  106300 scheduler.go:572] pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0319 16:15:16.548054  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1
I0319 16:15:16.548079  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1
I0319 16:15:16.548190  106300 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1", node "node1"
I0319 16:15:16.548204  106300 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0319 16:15:16.548247  106300 factory.go:733] Attempting to bind rpod-1 to node1
I0319 16:15:16.550275  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.026672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.550784  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:16.551008  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1/binding: (2.565311ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.551233  106300 scheduler.go:572] pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
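The two 201s on the pods/rpod-0/binding and pods/rpod-1/binding subresources are the scheduler's bind step: once AssumePodVolumes finds all PVCs bound and nothing to do, factory.go posts a Binding object tying the pod to node1. A sketch of the same call through client-go (the helper name is an assumption):

package bindsketch

import (
	"k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPod issues POST /api/v1/namespaces/<ns>/pods/<pod>/binding, the request
// that returns 201 in the log above.
func bindPod(cs kubernetes.Interface, ns, pod, node string) error {
	return cs.CoreV1().Pods(ns).Bind(&v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: pod},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	})
}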
I0319 16:15:16.552935  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:16.553118  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.55236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.553827  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:16.554110  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:16.556777  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:16.650692  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-0: (2.598796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.753876  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1: (2.160919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.754541  106300 preemption_test.go:561] Creating the preemptor pod...
I0319 16:15:16.757127  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.327496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.757381  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:16.757401  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:16.757541  106300 preemption_test.go:567] Creating additional pods...
I0319 16:15:16.757538  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.757601  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.759884  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.621184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36312]
I0319 16:15:16.759909  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.719311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36310]
I0319 16:15:16.761386  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod/status: (3.264307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36152]
I0319 16:15:16.764659  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.113456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36310]
I0319 16:15:16.764933  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.768058  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod/status: (2.738554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36310]
I0319 16:15:16.773292  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1: (4.762475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36310]
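This block is the preemption itself: preemptor-pod does not fit (Insufficient cpu, Insufficient memory), generic_scheduler marks node1 as a preemption candidate, the PUT to preemptor-pod/status records the nominated node, and the scheduler deletes the lower-priority victim rpod-1 (the DELETE above). A hedged sketch of a pod shaped like the preemptor; the priority value, image, and request sizes are illustrative assumptions, since the log does not show them:

package preemptorsketch

import (
	"k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newPreemptor builds a pod whose requests exceed the node's remaining
// capacity and whose priority outranks the running rpods, which is what
// drives the "potential node for preemption" / DELETE sequence above.
func newPreemptor(ns string) *v1.Pod {
	highPriority := int32(1000) // assumed; must exceed the victims' priority
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod", Namespace: ns},
		Spec: v1.PodSpec{
			Priority: &highPriority,
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						v1.ResourceCPU:    resource.MustParse("4"),
						v1.ResourceMemory: resource.MustParse("4Gi"),
					},
				},
			}},
		},
	}
}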
I0319 16:15:16.773553  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:16.773572  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:16.773702  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.773747  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.776172  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.27ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36310]
I0319 16:15:16.777002  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (16.04647ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.777885  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (3.483279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36314]
I0319 16:15:16.778052  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0/status: (3.481978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36312]
I0319 16:15:16.780709  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.469505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36310]
I0319 16:15:16.781313  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.528598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.782051  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (3.484633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36314]
I0319 16:15:16.782561  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.782737  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:16.782749  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:16.782843  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.782884  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.784809  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.31421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36310]
I0319 16:15:16.785059  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (2.004315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36314]
I0319 16:15:16.785329  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
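From here the same cycle repeats for ppod-1 through ppod-49: preemption_test.go creates the low-priority pods in the background ("Creating additional pods..." above) while the pending pods are re-evaluated, so each ppod fails with the same "no fit" message, gets its PodScheduled==False/Unschedulable condition written back via PUT .../status, and triggers another "potential node for preemption" pass. That interleaving of pod creation with the preemption cycle is the race the test exercises. A minimal sketch of the background creation loop under those assumptions (the priority value and image are illustrative):

package racesketch

import (
	"fmt"

	"k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPPodsInBackground creates the 50 ppod-N pods concurrently with the
// scheduler's preemption cycle, as seen in the interleaved log lines above;
// done receives the first error, or nil on success.
func createPPodsInBackground(cs kubernetes.Interface, ns string, done chan<- error) {
	go func() {
		lowPriority := int32(100) // assumed; must be below the preemptor's priority
		for i := 0; i < 50; i++ {
			pod := &v1.Pod{
				ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("ppod-%d", i), Namespace: ns},
				Spec: v1.PodSpec{
					Priority:   &lowPriority,
					Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
				},
			}
			if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
				done <- err
				return
			}
		}
		done <- nil
	}()
}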
I0319 16:15:16.785520  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:16.785536  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:16.785609  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.785645  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.788088  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (2.22767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36314]
I0319 16:15:16.788506  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1/status: (2.415148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.788724  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.490887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36310]
I0319 16:15:16.791782  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-0.158d684ac02cffaa: (7.562108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36316]
I0319 16:15:16.791944  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.887589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36310]
I0319 16:15:16.792343  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (4.123312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36144]
I0319 16:15:16.792741  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (3.91186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.793053  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.793224  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:16.793256  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:16.793342  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.793399  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.794616  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.387066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36316]
I0319 16:15:16.795351  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.604844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.795828  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2/status: (2.082823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36314]
I0319 16:15:16.795900  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.774989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36310]
I0319 16:15:16.798040  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.681896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36310]
I0319 16:15:16.798163  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.572333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36314]
I0319 16:15:16.798410  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.798746  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:16.798761  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:16.798850  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.798891  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.802019  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3/status: (2.118834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36320]
I0319 16:15:16.802447  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (2.861678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.802672  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.958538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36316]
I0319 16:15:16.802901  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.98103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36314]
I0319 16:15:16.804164  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.561139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36320]
I0319 16:15:16.804448  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.804651  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:16.804691  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:16.804800  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.804870  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.805614  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.378791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36316]
I0319 16:15:16.805847  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.487629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.808883  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (2.884353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36316]
I0319 16:15:16.808971  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4/status: (3.774351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36320]
I0319 16:15:16.808975  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.317437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.810103  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.706542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36322]
I0319 16:15:16.811095  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (1.337711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.811310  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.812277  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.692317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36320]
I0319 16:15:16.814408  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.645824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.814863  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:16.814883  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:16.815014  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.815076  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.818401  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.5383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.818409  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5/status: (3.048528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36322]
I0319 16:15:16.818447  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.102975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36326]
I0319 16:15:16.818875  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (2.460511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36324]
I0319 16:15:16.820356  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (1.093322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36322]
I0319 16:15:16.820629  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.820817  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.770424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36326]
I0319 16:15:16.820827  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:16.820843  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:16.820919  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.820973  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.822874  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (1.687237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36322]
I0319 16:15:16.823141  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.585984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36328]
I0319 16:15:16.823821  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.263727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36330]
I0319 16:15:16.824373  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6/status: (3.117492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.827316  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.005427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36328]
I0319 16:15:16.827594  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (2.749401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.827914  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.828150  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:16.828185  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:16.828294  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.828354  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.831149  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.878681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36332]
I0319 16:15:16.831643  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (2.741293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36322]
I0319 16:15:16.832059  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (4.013894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36328]
I0319 16:15:16.833126  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7/status: (2.719546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.835309  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (1.211048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.835605  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.836200  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.601909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36322]
I0319 16:15:16.836289  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:16.836300  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:16.836493  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.836552  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.839620  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8/status: (2.027617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.840023  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.214281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36334]
I0319 16:15:16.842029  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.66136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36334]
I0319 16:15:16.842302  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.842787  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:16.842806  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:16.842936  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.842985  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.843356  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (6.045935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36332]
I0319 16:15:16.845327  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (1.920635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36338]
I0319 16:15:16.846412  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (6.116501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.846644  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.555488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36332]
I0319 16:15:16.847090  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9/status: (3.637289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36334]
I0319 16:15:16.848928  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (1.207354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36334]
I0319 16:15:16.849172  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.849357  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:16.849371  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:16.849498  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.849553  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.849867  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.946199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.849875  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.427537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36332]
I0319 16:15:16.852015  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.499106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.852240  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (2.351463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36338]
I0319 16:15:16.852585  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10/status: (2.62045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36334]
I0319 16:15:16.852713  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.444659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36332]
I0319 16:15:16.854540  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.656447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36334]
I0319 16:15:16.854827  106300 cacher.go:647] cacher (*core.Pod): 3 objects queued in incoming channel.
I0319 16:15:16.854845  106300 cacher.go:647] cacher (*core.Pod): 4 objects queued in incoming channel.
I0319 16:15:16.854855  106300 cacher.go:647] cacher (*core.Pod): 5 objects queued in incoming channel.
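The cacher lines are backpressure diagnostics, not errors: with pod writes arriving this fast, the watch cache's incoming channel briefly holds 3-5 undispatched *core.Pod events before its watchers (for example the scheduler's informers) drain them. A sketch of a raw consumer on the same event stream, using the pre-context client-go signatures; the print format is illustrative:

package watchsketch

import (
	"fmt"

	"k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpPodEvents drains pod watch events for the namespace, the same stream
// the cacher above is queueing for its watchers.
func dumpPodEvents(cs kubernetes.Interface, ns string) error {
	w, err := cs.CoreV1().Pods(ns).Watch(metav1.ListOptions{})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if pod, ok := ev.Object.(*v1.Pod); ok {
			fmt.Printf("%s %s/%s\n", ev.Type, pod.Namespace, pod.Name)
		}
	}
	return nil
}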
I0319 16:15:16.854918  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.855237  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:16.855253  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:16.855389  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.130413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36338]
I0319 16:15:16.856563  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.856607  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.857507  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.687086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36336]
I0319 16:15:16.859419  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.596887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36336]
I0319 16:15:16.860117  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11/status: (3.215036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.860414  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.293485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36340]
I0319 16:15:16.861185  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.966061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36342]
I0319 16:15:16.862197  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.736898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.862389  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.863034  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.224205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36340]
I0319 16:15:16.863448  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:16.863478  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:16.863579  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.863620  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.865365  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.920128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.865781  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.492244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36344]
I0319 16:15:16.866231  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12/status: (1.938475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36336]
I0319 16:15:16.867357  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.371215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.869996  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (2.868539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36336]
I0319 16:15:16.870251  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.870332  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.265891ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36346]
I0319 16:15:16.870729  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:16.870745  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:16.870856  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.870895  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.872374  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.547818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.873216  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.291393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36350]
I0319 16:15:16.875854  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13/status: (4.719863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36344]
I0319 16:15:16.875869  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (3.407329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36348]
I0319 16:15:16.878538  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (1.237483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36348]
I0319 16:15:16.878767  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.879029  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:16.879051  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:16.879173  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.879218  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.879773  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (6.834931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.881822  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (1.693511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36350]
I0319 16:15:16.882871  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.329887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36318]
I0319 16:15:16.883187  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14/status: (2.691534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36348]
I0319 16:15:16.884789  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.492163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36354]
I0319 16:15:16.886094  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (2.439925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36352]
I0319 16:15:16.886388  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.886711  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:16.886729  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:16.886824  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.886883  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.887769  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.080288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36354]
I0319 16:15:16.889340  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (1.447575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36350]
I0319 16:15:16.890745  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.705596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36354]
I0319 16:15:16.891509  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15/status: (3.270797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36352]
I0319 16:15:16.892567  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.319309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36358]
I0319 16:15:16.893541  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (1.518443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36354]
I0319 16:15:16.893827  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.894167  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:16.894209  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:16.894342  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.894443  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.895928  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.806794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36356]
I0319 16:15:16.896695  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (2.014828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36354]
I0319 16:15:16.897553  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16/status: (2.032528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36350]
I0319 16:15:16.899208  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (4.00928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I0319 16:15:16.899772  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (1.801309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36350]
I0319 16:15:16.900012  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.900172  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:16.900189  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:16.900290  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.900351  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.956819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36356]
I0319 16:15:16.900350  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.902516  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.630239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I0319 16:15:16.902901  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (2.253403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36354]
I0319 16:15:16.904028  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.847813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36362]
I0319 16:15:16.905695  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17/status: (3.522549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36364]
I0319 16:15:16.907553  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.527735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I0319 16:15:16.908092  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (2.003865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36364]
I0319 16:15:16.908313  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.908590  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:16.908619  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:16.908728  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.908966  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.910346  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.278053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I0319 16:15:16.911182  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.479825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36366]
I0319 16:15:16.912185  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (2.499759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36354]
I0319 16:15:16.912387  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18/status: (2.430292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36364]
I0319 16:15:16.914106  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.787945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I0319 16:15:16.915269  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (2.399534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36354]
I0319 16:15:16.915652  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.915806  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:16.915821  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:16.915909  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.915951  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.917434  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.72472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I0319 16:15:16.917927  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (1.57225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36366]
I0319 16:15:16.920046  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19/status: (3.344566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36354]
I0319 16:15:16.920693  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.582025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36368]
I0319 16:15:16.921442  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (1.000262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36354]
I0319 16:15:16.921769  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.922131  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:16.922152  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:16.922261  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.922305  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.922706  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (4.339524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36366]
I0319 16:15:16.924579  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.480356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36370]
I0319 16:15:16.925804  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.293161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36366]
I0319 16:15:16.926001  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (2.932513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36360]
I0319 16:15:16.928676  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.75576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36366]
I0319 16:15:16.928871  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20/status: (5.499398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36368]
I0319 16:15:16.930720  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (1.288873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36368]
I0319 16:15:16.930960  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.931270  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:16.931358  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:16.931282  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.844069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36370]
I0319 16:15:16.931497  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.931543  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.933958  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (1.469983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36372]
I0319 16:15:16.934283  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.896462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36374]
I0319 16:15:16.934609  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21/status: (2.848297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36370]
I0319 16:15:16.936486  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (4.71799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36368]
I0319 16:15:16.936913  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (1.906768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36374]
I0319 16:15:16.937175  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.937378  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:16.937396  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:16.937540  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.937595  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.940252  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22/status: (2.234868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36374]
I0319 16:15:16.940636  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (1.742573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36376]
I0319 16:15:16.940960  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.659479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36372]
I0319 16:15:16.942073  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (5.19134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36368]
I0319 16:15:16.942371  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (1.360389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36376]
I0319 16:15:16.942742  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.942910  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:16.942942  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:16.943032  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.943084  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.946334  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.720348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36372]
I0319 16:15:16.946697  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.844304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:16.947831  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (3.717642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36374]
I0319 16:15:16.948984  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.040038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36372]
I0319 16:15:16.948993  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23/status: (1.859172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36376]
I0319 16:15:16.950739  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (1.18511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:16.950985  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.951188  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:16.951216  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:16.951334  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.825456ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36374]
I0319 16:15:16.951346  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.951416  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.952917  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.165398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:16.954750  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (2.789517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36374]
I0319 16:15:16.955019  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.955568  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:16.955590  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:16.955694  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.955742  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.956764  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-11.158d684ac51d65a5: (4.64171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36382]
I0319 16:15:16.957264  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (4.520474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36380]
I0319 16:15:16.958603  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24/status: (2.591495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36374]
I0319 16:15:16.960257  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (2.881719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:16.960435  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (1.507074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36374]
I0319 16:15:16.960724  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.960919  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:16.960936  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:16.961027  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.961093  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.962432  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (1.035226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36380]
I0319 16:15:16.964236  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25/status: (2.913753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:16.965704  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (1.069626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:16.966161  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.966370  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:16.966393  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:16.966506  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.966691  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.968872  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.868519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36380]
I0319 16:15:16.969309  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (10.815059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36382]
I0319 16:15:16.970439  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26/status: (3.522658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:16.973036  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.749806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:16.973380  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.973760  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.725534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36382]
I0319 16:15:16.973833  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:16.973951  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:16.974058  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.974135  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.976672  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (1.318863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36384]
I0319 16:15:16.978351  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.958376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:16.979098  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27/status: (4.392381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36380]
I0319 16:15:16.982483  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.556404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:16.982572  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (2.894863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36380]
I0319 16:15:16.982851  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.983041  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:16.983060  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:16.983184  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.983229  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.986009  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (2.095947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36380]
I0319 16:15:16.987116  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28/status: (3.105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:16.989511  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (1.464287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36386]
I0319 16:15:16.989772  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.989825  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.415539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36380]
I0319 16:15:16.990094  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:16.990115  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:16.990214  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.990257  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.992947  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (2.01803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:16.993974  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29/status: (3.444932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36386]
I0319 16:15:16.995095  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.138652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36388]
I0319 16:15:16.995982  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (1.335056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36386]
I0319 16:15:16.996261  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:16.996480  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:16.996509  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:16.996599  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:16.996651  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:16.999637  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (2.369811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:17.000610  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30/status: (2.987251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36388]
I0319 16:15:17.001219  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.769985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36390]
I0319 16:15:17.002611  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (1.355653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36388]
I0319 16:15:17.002904  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.003167  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:17.003189  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:17.003325  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.003371  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.005099  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (1.26007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:17.005687  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.740254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36394]
I0319 16:15:17.006287  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31/status: (2.385683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36390]
I0319 16:15:17.008204  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (1.33009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36394]
I0319 16:15:17.008709  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
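
Each "Updating pod condition" line above is paired with a PUT to /pods/<name>/status a few lines later. A hedged sketch of the condition entry that PUT carries; the field names follow the core/v1 PodCondition schema, but constructing and printing it this way is illustrative, not the factory.go code path:

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// PodCondition mirrors the core/v1 PodCondition fields relevant here.
type PodCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	cond := PodCondition{
		Type:               "PodScheduled",
		Status:             "False",
		LastTransitionTime: time.Now(),
		Reason:             "Unschedulable",
		Message:            "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.",
	}
	body, _ := json.MarshalIndent(cond, "", "  ")
	fmt.Println(string(body)) // the status.conditions entry sent in the PUT
}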
I0319 16:15:17.008896  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:17.008914  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:17.009018  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.009092  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.010791  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (1.164756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:17.011138  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.424033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36396]
I0319 16:15:17.012394  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32/status: (2.733248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36394]
I0319 16:15:17.014043  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (1.251243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36396]
I0319 16:15:17.014394  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.014607  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:17.014625  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:17.014750  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.014800  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.016601  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (1.549163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:17.016979  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.577138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36398]
I0319 16:15:17.019039  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33/status: (3.9902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36396]
I0319 16:15:17.020794  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (1.250296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36398]
I0319 16:15:17.021143  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.021332  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:17.021349  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:17.021467  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.021513  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.023990  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.80464ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36400]
I0319 16:15:17.024921  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34/status: (2.794092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36398]
I0319 16:15:17.024923  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (2.870456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:17.026884  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (1.486875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:17.027247  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.029124  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:17.029166  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:17.029279  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.033489  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (1.733387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:17.034580  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.037512  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.463741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36400]
I0319 16:15:17.038359  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35/status: (3.41795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:17.040311  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (1.491591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:17.040598  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.040783  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:17.040799  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:17.040916  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.040968  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.043999  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.325567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I0319 16:15:17.044299  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36/status: (2.976567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:17.045793  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (1.835476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36400]
I0319 16:15:17.046913  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (2.198112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36378]
I0319 16:15:17.047256  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.047427  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:17.047444  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:17.047572  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.047623  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.049603  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (1.703237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I0319 16:15:17.051111  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37/status: (3.18695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36400]
I0319 16:15:17.052229  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.632846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36408]
I0319 16:15:17.052771  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (1.168006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36400]
I0319 16:15:17.053042  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.053345  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:17.053356  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:17.053434  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.053498  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.071928  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (6.751785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36438]
I0319 16:15:17.072422  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (17.53409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36400]
I0319 16:15:17.072733  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
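
Unschedulable pods are retried rather than dropped: ppod-18 was first rejected at 16:15:16.908 and is attempted again here at 16:15:17.053. A toy sketch of that requeue loop, assuming a plain FIFO (the real scheduling queue adds backoff and priority ordering):

package main

import "fmt"

type Queue struct{ items []string }

func (q *Queue) Push(pod string) { q.items = append(q.items, pod) }
func (q *Queue) Pop() (string, bool) {
	if len(q.items) == 0 {
		return "", false
	}
	p := q.items[0]
	q.items = q.items[1:]
	return p, true
}

// schedulable stands in for the fit check; in this test nothing ever fits,
// so every pod cycles back into the queue, as in the log above.
func schedulable(pod string) bool { return false }

func main() {
	q := &Queue{}
	q.Push("ppod-18")
	for attempt := 0; attempt < 2; attempt++ {
		if pod, ok := q.Pop(); ok {
			fmt.Printf("About to try and schedule pod %s\n", pod)
			if !schedulable(pod) {
				fmt.Printf("Unable to schedule %s: no fit; waiting\n", pod)
				q.Push(pod) // requeue for a later attempt
			}
		}
	}
}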
I0319 16:15:17.074767  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-18.158d684ac83c37d3: (19.420407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36408]
I0319 16:15:17.075262  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:17.075279  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:17.075568  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.075620  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.078743  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.421561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36440]
I0319 16:15:17.083977  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38/status: (6.999915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36400]
I0319 16:15:17.083989  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (8.033787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36438]
I0319 16:15:17.086562  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (1.404609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36438]
I0319 16:15:17.087090  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.087744  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:17.087763  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:17.087899  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.087941  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.090778  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.041509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36442]
I0319 16:15:17.091350  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (2.389444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36440]
I0319 16:15:17.091776  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39/status: (3.556089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36438]
I0319 16:15:17.093188  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (3.028571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I0319 16:15:17.093736  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (1.469989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36440]
I0319 16:15:17.094034  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.094259  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:17.094278  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:17.094384  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.094430  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.096019  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (1.215529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36442]
I0319 16:15:17.097247  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40/status: (2.500198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I0319 16:15:17.099221  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.561816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36444]
I0319 16:15:17.099371  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (1.09869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I0319 16:15:17.099643  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.099835  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:17.099851  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:17.099958  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.100010  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.102008  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (1.665283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36442]
I0319 16:15:17.103167  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41/status: (2.48109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I0319 16:15:17.104753  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (1.19162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I0319 16:15:17.104977  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.105152  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:17.105174  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:17.105296  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.105378  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.108268  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.385765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36444]
I0319 16:15:17.109094  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (3.299853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36442]
I0319 16:15:17.109525  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42/status: (3.889424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36406]
I0319 16:15:17.112177  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.915307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36444]
I0319 16:15:17.112252  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (1.726973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36442]
I0319 16:15:17.112596  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.112770  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:17.112787  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:17.112881  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.112931  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.114756  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (1.5271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36444]
I0319 16:15:17.115029  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.457588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36446]
I0319 16:15:17.116313  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43/status: (3.055383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36442]
I0319 16:15:17.117936  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (1.198799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36444]
I0319 16:15:17.118208  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.118403  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:17.118420  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:17.118541  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.118584  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.120106  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (1.305658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36444]
I0319 16:15:17.120879  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.677633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36448]
I0319 16:15:17.121679  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44/status: (2.831339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36446]
I0319 16:15:17.123571  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (1.424026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36448]
I0319 16:15:17.123839  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.124013  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:17.124052  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:17.124203  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.124249  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.125893  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (1.251994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36448]
I0319 16:15:17.126291  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.353327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.127690  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45/status: (3.147699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36444]
I0319 16:15:17.129297  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (1.125705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.129587  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.129792  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:17.129808  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:17.129911  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.129962  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.131812  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (1.59184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36448]
I0319 16:15:17.132404  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (2.285557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.132675  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.132816  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:17.132835  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:17.132941  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.132985  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.135118  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46/status: (1.882515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36448]
I0319 16:15:17.135130  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (1.109526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.135520  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-22.158d684ac9f12aa0: (4.25349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36452]
I0319 16:15:17.136601  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (1.070932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36448]
I0319 16:15:17.136904  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.137089  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:17.137129  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:17.137233  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.137286  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.137317  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.364985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36452]
I0319 16:15:17.138596  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (1.002656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36452]
I0319 16:15:17.138948  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (1.507625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36448]
I0319 16:15:17.139203  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.139359  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:17.139374  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:17.139509  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.139549  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.140479  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-23.158d684aca44eb38: (2.493675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.141273  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (1.230688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36452]
I0319 16:15:17.142795  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.851736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.143111  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47/status: (3.101649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36448]
I0319 16:15:17.144735  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (1.168753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.145024  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.145247  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:17.145264  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:17.145361  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.145446  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.150175  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (4.511865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.152483  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.782304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36454]
I0319 16:15:17.153169  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48/status: (7.459892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36452]
I0319 16:15:17.157817  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (3.879318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36452]
I0319 16:15:17.158126  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.158321  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:17.158341  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:17.158488  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.158554  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.173033  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49/status: (14.201188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36452]
I0319 16:15:17.175910  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (1.977661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36452]
I0319 16:15:17.176542  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.176756  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:17.176777  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:17.176893  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.176947  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (18.033487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.176947  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.178897  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (5.167003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36456]
I0319 16:15:17.179138  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (1.922041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36452]
I0319 16:15:17.185367  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (4.932778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36456]
I0319 16:15:17.185639  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.832662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36454]
I0319 16:15:17.185964  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.186153  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:17.186174  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:17.186257  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.186316  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.189020  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (2.455301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36456]
I0319 16:15:17.189576  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.189852  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:17.189900  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:17.190032  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (3.375171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.190039  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.190127  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.192293  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (1.967137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.192764  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.193019  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:17.193074  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:17.193280  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.193315  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-27.158d684acc1eb679: (5.210724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36454]
I0319 16:15:17.193343  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.198259  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (7.906448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36456]
I0319 16:15:17.199760  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (5.315615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36454]
I0319 16:15:17.200116  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.200436  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:17.200498  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:17.200691  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.200775  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.201757  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (8.097886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.202776  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (1.719944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36460]
I0319 16:15:17.204025  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (1.818305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36458]
I0319 16:15:17.204404  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.204578  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:17.204599  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:17.204688  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.204740  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.206204  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-29.158d684acd14b9b5: (6.136115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36456]
I0319 16:15:17.206974  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (1.217336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36460]
I0319 16:15:17.207289  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.208474  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (1.205195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.209298  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-34.158d684acef1a6a3: (2.425515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36456]
I0319 16:15:17.213863  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-36.158d684ad01a81aa: (3.668922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.217388  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-42.158d684ad3f0f23b: (2.771441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.220681  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-49.158d684ad71caf57: (2.653589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.276406  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.876742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.375745  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.107815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.475988  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.453619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.551010  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:17.553159  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:17.553867  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:17.553888  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:17.554008  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:17.554209  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:17.554298  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:17.554303  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:17.556975  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:17.557414  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (2.801413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.557415  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (2.776196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36460]
I0319 16:15:17.557803  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:17.557827  106300 backoff_utils.go:79] Backing off 4s
I0319 16:15:17.558597  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-18.158d684ac83c37d3: (3.344362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36466]
I0319 16:15:17.576055  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.358531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.675442  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.725755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.775875  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.87257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.875393  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.892804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:17.975573  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.020078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:18.075674  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.086561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:18.175551  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.974797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:18.277183  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.891639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:18.375763  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.237264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:18.443077  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:18.443113  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:18.443330  106300 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod", node "node1"
I0319 16:15:18.443350  106300 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0319 16:15:18.443413  106300 factory.go:733] Attempting to bind preemptor-pod to node1
I0319 16:15:18.443494  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:18.443525  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:18.443694  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.443753  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.446351  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (1.617209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36598]
I0319 16:15:18.446863  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (2.805033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36460]
I0319 16:15:18.447198  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.447312  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.447729  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:18.447746  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod/binding: (3.933828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36450]
I0319 16:15:18.447753  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:18.447836  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.448717  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.449198  106300 scheduler.go:572] pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0319 16:15:18.448633  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-1.158d684ac0e290ad: (3.776431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36600]
I0319 16:15:18.450277  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (1.261533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36598]
I0319 16:15:18.450883  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.451560  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (2.287102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36460]
I0319 16:15:18.452582  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.452726  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:18.452741  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:18.452838  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.452882  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.454203  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-0.158d684ac02cffaa: (2.965724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36598]
I0319 16:15:18.454255  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.207462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36460]
I0319 16:15:18.454262  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.115816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36600]
I0319 16:15:18.454520  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.454579  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.454680  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:18.454698  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:18.454792  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.454842  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.456543  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.862986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36460]
I0319 16:15:18.456957  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.596036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36602]
I0319 16:15:18.456988  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.624724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36598]
I0319 16:15:18.457258  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.457341  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.457488  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:18.457503  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:18.457597  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.457661  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.459299  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (1.106509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36602]
I0319 16:15:18.459555  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-2.158d684ac158dd21: (2.367852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36460]
I0319 16:15:18.459657  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.459820  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:18.459839  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:18.459916  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (2.043754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36598]
I0319 16:15:18.459985  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.460048  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.460203  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.461894  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (1.201229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36602]
I0319 16:15:18.462130  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (1.386756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36460]
I0319 16:15:18.462162  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.462411  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.462650  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:18.462670  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:18.462784  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.462824  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.463547  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-3.158d684ac1acb982: (2.670203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36604]
I0319 16:15:18.464569  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (1.3246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36460]
I0319 16:15:18.464891  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.465039  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:18.465057  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:18.465196  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.465235  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.466186  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (2.970757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36602]
I0319 16:15:18.466641  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.466715  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-4.158d684ac207e944: (2.576562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36604]
I0319 16:15:18.467149  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (1.444734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36460]
I0319 16:15:18.467262  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (1.15881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36606]
I0319 16:15:18.467450  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.467639  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:18.467655  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:18.467785  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.467823  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.467880  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.471223  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (2.216523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36608]
I0319 16:15:18.471553  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.472859  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-5.158d684ac2a36add: (4.569797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36604]
I0319 16:15:18.474500  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.188414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36604]
I0319 16:15:18.474715  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.989983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36602]
I0319 16:15:18.474911  106300 preemption_test.go:583] Check unschedulable pods still exist and were never scheduled...
I0319 16:15:18.475009  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.475204  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:18.475236  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:18.475319  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.475362  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.477630  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (2.109208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36604]
I0319 16:15:18.477905  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.478084  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:18.478114  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:18.478257  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.478304  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.478369  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (3.247478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36602]
I0319 16:15:18.494371  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (14.981963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36612]
I0319 16:15:18.494404  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (15.686944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36602]
I0319 16:15:18.494404  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (15.869525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36610]
I0319 16:15:18.494404  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (15.888099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36604]
I0319 16:15:18.494727  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.494876  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.494995  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.495174  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:18.495206  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:18.495323  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.495400  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.497265  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (2.20436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36602]
I0319 16:15:18.497275  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.635064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36612]
I0319 16:15:18.497626  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.528843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36614]
I0319 16:15:18.497657  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.497839  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:18.497855  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:18.497862  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.497959  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.498004  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.498926  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.160503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36612]
I0319 16:15:18.499624  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-6.158d684ac2fdabb6: (25.532486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36608]
I0319 16:15:18.499768  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (1.463633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36614]
I0319 16:15:18.500011  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.500226  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:18.500264  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:18.500282  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (1.921895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36602]
I0319 16:15:18.500413  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.500502  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.500677  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (1.216758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36612]
I0319 16:15:18.500689  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.502313  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (1.53136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36608]
I0319 16:15:18.502615  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (1.456586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36612]
I0319 16:15:18.502686  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.503199  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:18.503234  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:18.503442  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.503513  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.503525  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (2.282687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36616]
I0319 16:15:18.503563  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.505041  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-7.158d684ac36e3811: (4.171702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36614]
I0319 16:15:18.505635  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (1.892806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36608]
I0319 16:15:18.505669  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (1.871464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36616]
I0319 16:15:18.505860  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.505949  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.505990  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:18.505998  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:18.506118  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.506155  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.508001  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (1.435616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36616]
I0319 16:15:18.508049  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (3.076715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0319 16:15:18.508420  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (1.861173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36608]
I0319 16:15:18.508569  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.508679  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.508927  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:18.508952  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:18.509041  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.509092  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.509849  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (1.223807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0319 16:15:18.510562  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-8.158d684ac3eb1e2c: (4.546651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36614]
I0319 16:15:18.511171  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (1.378872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36608]
I0319 16:15:18.511657  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (2.154214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36616]
I0319 16:15:18.511852  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.511992  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:18.512005  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:18.512090  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.510545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0319 16:15:18.512104  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.512146  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.512437  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.513911  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (1.450114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36616]
I0319 16:15:18.514258  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (1.913109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36608]
I0319 16:15:18.514411  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (1.774131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36618]
I0319 16:15:18.514600  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.514910  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.514987  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:18.515012  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:18.515111  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.515172  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.516244  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.116136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36616]
I0319 16:15:18.517375  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (1.641695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36608]
I0319 16:15:18.517630  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.517783  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (2.039013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36620]
I0319 16:15:18.518016  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.518193  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:18.518229  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:18.518339  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.518404  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.518483  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-9.158d684ac44d855c: (6.638076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36614]
I0319 16:15:18.519164  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.41423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36616]
I0319 16:15:18.520101  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (1.434157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36620]
I0319 16:15:18.520312  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.521240  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.662669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36614]
I0319 16:15:18.521420  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (2.022629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36608]
I0319 16:15:18.521439  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:18.521630  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:18.521768  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.521842  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.521844  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.522371  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-10.158d684ac4b1c342: (2.653294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36616]
I0319 16:15:18.523224  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (1.399783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36614]
I0319 16:15:18.523773  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.266691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0319 16:15:18.524075  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.525017  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (1.275438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36614]
I0319 16:15:18.525495  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-12.158d684ac5886046: (2.138786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36616]
I0319 16:15:18.526351  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (961.617µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36614]
I0319 16:15:18.527541  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (5.408792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36620]
I0319 16:15:18.527820  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.528163  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (1.444987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36614]
I0319 16:15:18.528252  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:18.528383  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:18.528567  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.528611  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.530548  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (1.460079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0319 16:15:18.530866  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.530938  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (1.784037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36624]
I0319 16:15:18.531051  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:18.531188  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:18.531080  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (1.95725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36620]
I0319 16:15:18.531440  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.531528  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-13.158d684ac5f76978: (5.373089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36616]
I0319 16:15:18.531895  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.531961  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.534166  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (1.42537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36624]
I0319 16:15:18.534212  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (2.059242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0319 16:15:18.534866  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.535338  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:18.535392  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:18.535426  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (1.929043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36628]
I0319 16:15:18.535562  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.535605  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.535707  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.536588  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-14.158d684ac67669b5: (3.731514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36626]
I0319 16:15:18.537135  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.233185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0319 16:15:18.537654  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.896781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36628]
I0319 16:15:18.537973  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.538059  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.538210  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:18.538229  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:18.538378  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.538423  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.539433  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (1.159276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36624]
I0319 16:15:18.539702  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (952.493µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0319 16:15:18.540372  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.540529  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:18.540567  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:18.540680  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (1.064751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36628]
I0319 16:15:18.540683  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.540731  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-15.158d684ac6eb33df: (3.468805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36626]
I0319 16:15:18.540772  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.540939  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.541869  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (1.256687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36624]
I0319 16:15:18.542315  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (1.30554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36626]
I0319 16:15:18.542577  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.542744  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (1.806898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36628]
I0319 16:15:18.543326  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (1.063273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36624]
I0319 16:15:18.543402  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.543577  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:18.543593  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:18.543713  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.543757  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.545073  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-16.158d684ac75e40da: (3.390691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0319 16:15:18.546339  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (2.045953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36630]
I0319 16:15:18.546713  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (2.527446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36626]
I0319 16:15:18.546713  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.546925  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (3.110161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36628]
I0319 16:15:18.546992  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.547145  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:18.547159  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:18.547263  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.547297  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.549440  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (1.616037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36632]
I0319 16:15:18.549726  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (2.075899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0319 16:15:18.549738  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.550779  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.550935  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:18.550951  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:18.551057  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.551118  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (3.828442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36628]
I0319 16:15:18.551190  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.551240  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:18.552418  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-17.158d684ac7b8b438: (5.431461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36630]
I0319 16:15:18.553387  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (1.843174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0319 16:15:18.553696  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:18.553747  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (2.043053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36634]
I0319 16:15:18.553875  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.553961  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (2.307717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36632]
I0319 16:15:18.554392  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:18.554440  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:18.554549  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.554590  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.554897  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:18.555245  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.555628  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:18.556880  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (1.549409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36636]
I0319 16:15:18.557098  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
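
The reflector.go:235 lines record the shared informer factory forcing its periodic resync, which replays cached objects to event handlers even when nothing changed. A runnable sketch of that mechanism against a fake clientset; the 2s period and 3s wait are assumed values for demonstration, the test's real resync interval is not visible in this log:

    package main

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes/fake"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        client := fake.NewSimpleClientset(
            &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "ppod-0", Namespace: "test"}},
        )
        // Every resyncPeriod the informer re-delivers its cache to UpdateFunc;
        // that replay is what reflector.go logs as "forcing resync".
        factory := informers.NewSharedInformerFactory(client, 2*time.Second)
        podInformer := factory.Core().V1().Pods().Informer()
        podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            UpdateFunc: func(oldObj, newObj interface{}) {
                fmt.Println("resync delivered", newObj.(*v1.Pod).Name)
            },
        })
        stop := make(chan struct{})
        factory.Start(stop)
        cache.WaitForCacheSync(stop, podInformer.HasSynced)
        time.Sleep(3 * time.Second) // long enough to observe one forced resync
        close(stop)
    }
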
I0319 16:15:18.557176  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.557479  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (2.400731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36622]
I0319 16:15:18.557516  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (2.192523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36632]
I0319 16:15:18.557552  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-19.158d684ac8a6e8e1: (4.332249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36630]
I0319 16:15:18.557777  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.557950  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:18.557986  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:18.558115  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.558173  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.559881  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.824255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36636]
I0319 16:15:18.560182  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (1.625381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36638]
I0319 16:15:18.560475  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.560645  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:18.560665  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:18.560771  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.560915  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
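
Each "Updating pod condition" line from factory.go:742 writes the PodScheduled=False condition that the GETs which follow read back. Reconstructed below as a Go value; the field values match the log line, but the struct literal itself is illustrative rather than copied from the test source:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        // The condition recorded for each failed pod; the message mirrors the
        // "no fit" reason logged just before the condition update.
        cond := v1.PodCondition{
            Type:    v1.PodScheduled,
            Status:  v1.ConditionFalse,
            Reason:  v1.PodReasonUnschedulable,
            Message: "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.",
        }
        fmt.Printf("%s=%s (%s): %s\n", cond.Type, cond.Status, cond.Reason, cond.Message)
    }
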
I0319 16:15:18.561565  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (2.847065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36640]
I0319 16:15:18.561886  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-20.158d684ac907db8c: (3.657945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36634]
I0319 16:15:18.562239  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.562585  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (1.468058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36638]
I0319 16:15:18.562758  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (2.361468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36636]
I0319 16:15:18.562874  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.562928  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (1.510433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36642]
I0319 16:15:18.563045  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:18.563075  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:18.563246  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.563252  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.563303  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.564885  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (1.455548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36640]
I0319 16:15:18.564892  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (1.402791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36638]
I0319 16:15:18.565157  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.565950  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (1.678621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36644]
I0319 16:15:18.566122  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:18.566140  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:18.566220  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.566260  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.566642  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.566829  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (1.411126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36638]
I0319 16:15:18.567633  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-21.158d684ac994d136: (5.023961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36634]
I0319 16:15:18.568240  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (1.750949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36640]
I0319 16:15:18.568598  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.568785  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:18.568849  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:18.569037  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.569107  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.570339  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (2.286938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36634]
I0319 16:15:18.570807  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (1.420877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36638]
I0319 16:15:18.571202  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (4.596826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36644]
I0319 16:15:18.571329  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (1.736943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36640]
I0319 16:15:18.571969  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.572040  106300 backoff_utils.go:79] Backing off 2s
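
Every "Backing off 2s" line comes from the scheduler's per-pod backoff table: each failed attempt grows that pod's retry delay up to a cap, which is why all the ppods sit at 2s here after their early failures. A self-contained sketch of the doubling scheme; the 1s initial delay and 10s cap are assumptions, not values read from backoff_utils.go:

    package main

    import (
        "fmt"
        "time"
    )

    // podBackoff is a minimal per-pod backoff table in the spirit of
    // backoff_utils.go: each failed attempt doubles the wait, up to max.
    type podBackoff struct {
        initial, max time.Duration
        current      map[string]time.Duration
    }

    func (b *podBackoff) next(pod string) time.Duration {
        d, ok := b.current[pod]
        switch {
        case !ok:
            d = b.initial
        case d*2 <= b.max:
            d *= 2
        default:
            d = b.max
        }
        b.current[pod] = d
        return d
    }

    func main() {
        b := &podBackoff{initial: time.Second, max: 10 * time.Second,
            current: map[string]time.Duration{}}
        for i := 0; i < 4; i++ {
            fmt.Println("Backing off", b.next("ppod-26")) // 1s, 2s, 4s, 8s
        }
    }
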
I0319 16:15:18.572291  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (1.405361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36634]
I0319 16:15:18.572842  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.575253  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (1.162032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36638]
I0319 16:15:18.576004  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-11.158d684ac51d65a5: (3.9487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36646]
I0319 16:15:18.579918  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:18.579941  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:18.580075  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.580131  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.584336  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (3.908064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36644]
I0319 16:15:18.584677  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.584889  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:18.584921  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:18.585109  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.585276  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (4.364584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36640]
I0319 16:15:18.585692  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.586867  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-24.158d684acb060feb: (4.201393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36648]
I0319 16:15:18.587126  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (5.81333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.587540  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.588150  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (2.155633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36644]
I0319 16:15:18.588517  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.588757  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:18.588800  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:18.588903  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (2.452937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36652]
I0319 16:15:18.588991  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.589398  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (1.354979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.589490  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.590161  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.596203  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (2.556639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36648]
I0319 16:15:18.597248  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (2.434004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36656]
I0319 16:15:18.597248  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (3.504294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36644]
I0319 16:15:18.597841  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.597861  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.598124  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:18.598231  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:18.598398  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.598613  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.599008  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-25.158d684acb57ba81: (4.197607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.599666  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (1.736187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36656]
I0319 16:15:18.601370  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (2.126227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36658]
I0319 16:15:18.601897  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (2.748324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36654]
I0319 16:15:18.601952  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.602226  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.602435  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:18.602477  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:18.602582  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.602637  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.602834  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (1.957855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.605015  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-26.158d684acbacf545: (4.000429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36656]
I0319 16:15:18.605309  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (1.807162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.605421  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (2.291211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36654]
I0319 16:15:18.605521  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (2.391771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36658]
I0319 16:15:18.605934  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.605954  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.606162  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:18.606196  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:18.606304  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.606365  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.609884  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (3.005122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36662]
I0319 16:15:18.610219  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (4.17375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.610278  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.610480  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:18.610540  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:18.610678  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.610743  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.611497  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-28.158d684acca97ef2: (4.670133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36656]
I0319 16:15:18.611842  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (3.313739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.612665  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.613805  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (2.614555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36662]
I0319 16:15:18.613957  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (2.361728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36666]
I0319 16:15:18.615052  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.615412  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (4.453937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.615729  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.615945  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:18.615979  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:18.616120  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.616185  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.616654  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-30.158d684acd764b91: (4.597879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36656]
I0319 16:15:18.618390  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (3.486342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36666]
I0319 16:15:18.618968  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (2.422421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.619417  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (2.808358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.619673  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.619708  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.619920  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:18.619948  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:18.620043  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.620093  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.620803  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (1.289913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36666]
I0319 16:15:18.621776  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (1.401346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.621950  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (1.53969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.622172  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.622237  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.622433  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:18.622491  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:18.623571  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.623624  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.623828  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (2.541571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36666]
I0319 16:15:18.624818  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-31.158d684acddcd014: (6.445668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36656]
I0319 16:15:18.625822  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (1.854018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.626030  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (1.161056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36666]
I0319 16:15:18.626136  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.626391  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:18.626442  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:18.626587  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.626640  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.628407  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (1.913919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36666]
I0319 16:15:18.628570  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (3.550583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.629119  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-32.158d684ace341d1f: (3.293845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36656]
I0319 16:15:18.629834  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.631199  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (1.795064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36670]
I0319 16:15:18.631362  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (2.363647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36666]
I0319 16:15:18.631790  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.632817  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (1.135787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36670]
I0319 16:15:18.633033  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (5.799624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.633172  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-33.158d684ace8b36f8: (2.546289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36656]
I0319 16:15:18.633379  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.633570  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:18.633587  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:18.633686  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.633734  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.635585  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (2.344884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36666]
I0319 16:15:18.635726  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (1.831171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.636006  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (2.017649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.636014  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.636312  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:18.636340  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.636380  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:18.636589  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.636652  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.637488  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (1.229686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36666]
I0319 16:15:18.637751  106300 preemption_test.go:598] Cleaning up all pods...
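
At preemption_test.go:598 the test stops driving the scheduler and tears the namespace down; everything below is the cleanup loop interleaved with the scheduler draining its queue. A sketch of such a cleanup against a fake clientset; it uses current client-go signatures, which differ from the 2019 vintage that produced this log, and shortens the namespace string:

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes/fake"
    )

    func main() {
        const ns = "preemption-race"
        client := fake.NewSimpleClientset(
            &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "ppod-0", Namespace: ns}},
            &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "ppod-1", Namespace: ns}},
        )
        ctx := context.Background()
        pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            // Each iteration corresponds to one DELETE /api/v1/namespaces/.../pods/ppod-N
            // line in the log below.
            if err := client.CoreV1().Pods(ns).Delete(ctx, p.Name, metav1.DeleteOptions{}); err != nil {
                panic(err)
            }
            fmt.Println("deleted", p.Name)
        }
    }
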
I0319 16:15:18.637983  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (1.163329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.638211  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.638846  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (1.925141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.639027  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:18.639082  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:18.639207  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:18.639220  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.639356  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:18.641248  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-35.158d684acf6af548: (6.728655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36672]
I0319 16:15:18.641581  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (1.447971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.642700  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (2.359217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.643157  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:18.643685  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:18.643832  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:18.643871  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:18.644401  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (6.44128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36666]
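
The scheduler.go:449 "Skip schedule deleting pod" lines are the other half of cleanup: a pod already marked for deletion is popped from the queue but never scheduled. The guard reduces to a nil check on the deletion timestamp; a minimal reconstruction, not the scheduler's actual source:

    package main

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // skipPodSchedule mirrors the guard behind "Skip schedule deleting pod":
    // anything with a DeletionTimestamp is dropped instead of scheduled.
    func skipPodSchedule(pod *v1.Pod) bool {
        return pod.DeletionTimestamp != nil
    }

    func main() {
        now := metav1.NewTime(time.Now())
        pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "ppod-0", DeletionTimestamp: &now}}
        fmt.Println("skip scheduling:", skipPodSchedule(pod)) // true
    }
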
I0319 16:15:18.646196  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-37.158d684ad0800c3a: (3.618179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36672]
I0319 16:15:18.650193  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-38.158d684ad22b3858: (3.353803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36672]
I0319 16:15:18.650204  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:18.650386  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:18.651230  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (6.441917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.653521  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-39.158d684ad2e73eef: (2.563999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36672]
I0319 16:15:18.655655  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:18.655697  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:18.656674  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (4.947117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.656791  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-40.158d684ad34a3f9a: (2.682675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36672]
I0319 16:15:18.660720  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-41.158d684ad39f5ff5: (3.05586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.660850  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:18.660898  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:18.661381  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (4.261167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.664165  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-43.158d684ad4648ec1: (2.743384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.664742  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:18.664779  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:18.665608  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (3.885437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.667855  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-44.158d684ad4bad850: (2.512216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.669777  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:18.669809  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:18.672110  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (6.106703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.675669  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-45.158d684ad51144d7: (6.665763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.675711  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:18.676335  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:18.677690  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (5.183713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.679978  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-22.158d684ac9f12aa0: (2.591063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.680650  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:18.680685  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:18.683324  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (5.248199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.684186  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-46.158d684ad596923d: (3.365521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.686353  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:18.686560  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:18.687869  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-23.158d684aca44eb38: (2.682926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.689531  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (5.832372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.690946  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-47.158d684ad5fac00c: (2.433653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.694115  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:18.694170  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-48.158d684ad654b091: (2.51459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.694220  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:18.696821  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (6.783436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.698436  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-27.158d684acc1eb679: (2.565478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.700188  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:18.700280  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:18.701403  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (4.245143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.701888  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-29.158d684acd14b9b5: (2.612545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.705019  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-34.158d684acef1a6a3: (2.325251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.705176  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:18.705223  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:18.706838  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (4.915498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.708541  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-36.158d684ad01a81aa: (2.625104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.709673  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:18.709754  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:18.711485  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (4.269046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.712436  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-42.158d684ad3f0f23b: (3.018175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.715640  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:18.715677  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:18.716910  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-49.158d684ad71caf57: (3.209453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.717602  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (5.718845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.719166  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.753812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.720492  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:18.721149  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:18.721584  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.827509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.723140  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (5.184989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.723981  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.525237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.726797  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.247417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.728130  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:18.728169  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:18.730331  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (5.739953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.731103  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.210434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.733220  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.628847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.735554  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:18.735621  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:18.736221  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (5.514654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.737670  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.80967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.739620  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.490673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.739924  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:18.739970  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:18.742417  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.658698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.742502  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (5.567601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.744341  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.296236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.746178  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:18.746216  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:18.746476  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.661455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.747636  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (4.351042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.748992  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.099991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.751055  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.502594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.752383  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:18.752426  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:18.753318  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.570814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.755630  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (7.409542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.755803  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.300934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.757537  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.337642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.759208  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:18.759358  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.401326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.759538  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:18.766446  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (10.049202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.775340  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (15.312091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.800750  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:18.805024  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:18.807796  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (39.6004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.817529  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:18.817628  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:18.821265  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (11.993492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.823439  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.33875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.828135  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (4.076531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.835599  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (6.899095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.843611  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (21.725083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.952496  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (4.16851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.954955  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.806595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.961363  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:18.961480  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:18.965603  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (17.078894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.973732  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:18.973825  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:18.975395  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (13.584986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.976613  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (10.59787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.979948  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:18.979986  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:18.980683  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (4.797571ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.983539  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (6.178485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.984141  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.17144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.986887  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:18.987538  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:18.991104  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (6.796744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.992335  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (4.45078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:18.995264  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:18.996100  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:18.998344  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (6.288833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:18.998815  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.391064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.003217  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:19.003281  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:19.005877  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.236648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.006988  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (7.729744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.010324  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:19.010357  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:19.021784  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (11.184161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.024299  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (16.647759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.029428  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:19.031408  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (6.542386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.033284  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.546964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.033757  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:19.035494  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:19.035532  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:19.038708  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (6.968891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.038899  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.900799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.042816  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:19.042851  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:19.045167  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.016282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.046096  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (6.993727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.049796  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:19.049862  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:19.051723  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (5.300461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.051799  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.576703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.055726  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:19.055772  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:19.058181  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (5.975831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.058565  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.528183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.073549  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:19.073599  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:19.075170  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (8.751744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.075496  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.589406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.078949  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:19.079293  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:19.080714  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (5.125256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.081529  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.744146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.084682  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:19.084741  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:19.086003  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (4.778723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.086558  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.364672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.089485  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:19.089548  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:19.090743  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (4.112008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.091173  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.232844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.093671  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:19.093726  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:19.095659  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.566052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.095745  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (4.472752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.099035  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:19.099085  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:19.100626  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.319326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.101285  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (5.186157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.107749  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:19.107804  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:19.111009  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.964269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.112341  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (10.706085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.115654  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:19.115725  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:19.117405  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.306899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.118499  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (5.775897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.125131  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:19.125197  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:19.126992  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.428971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.127395  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (4.86122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.134049  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:19.134435  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:19.136113  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.348901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.137827  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (9.456011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.140368  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:19.140424  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:19.142214  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (3.988735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.142432  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.701384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.144569  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:19.144615  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:19.146261  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (3.777641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.146277  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.383866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.149720  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:19.150134  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:19.152638  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.937009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.152945  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (6.289398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.156020  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:19.156052  106300 scheduler.go:449] Skip schedule deleting pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:19.157326  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (4.017582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.157748  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.399029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.161267  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-0: (3.631347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.162524  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1: (947.808µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.166906  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (4.024724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.171798  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (1.378919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.175629  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (2.058233ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.179274  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.736643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.183520  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (2.325605ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.186885  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (1.594588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.190114  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (1.477788ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.196138  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (2.250837ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.199466  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (1.575128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.203053  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.82135ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.206267  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (1.511802ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.209329  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.305557ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.213230  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.654855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.217230  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.802083ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.220772  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (1.781534ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.224177  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (1.580156ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.227529  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (1.642614ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.232501  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (2.666594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.236651  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (2.06673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.240224  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (1.797144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.243778  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (1.682093ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.247413  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (1.831934ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.251637  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (1.590174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.255620  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (2.198612ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.259312  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (1.979488ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.262599  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (1.547603ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.265798  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (1.583267ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.269760  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (2.138515ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.273255  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (1.825111ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.278849  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (2.020482ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.284646  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (1.602026ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.289056  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (2.524516ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.293369  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (1.750264ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.297566  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (1.730916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.301591  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (1.850738ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.304958  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (1.557465ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.308539  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (1.77109ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.311711  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (1.477006ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.315127  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (1.755167ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.318534  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (1.699256ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.321751  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (1.556537ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.324966  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (1.580449ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.328631  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (1.70261ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.331748  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (1.409464ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.345398  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (11.545012ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.352655  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (1.254241ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.356248  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (1.932512ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.359875  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (1.823137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.363415  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (1.637716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.370411  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (5.242345ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.378249  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (1.45338ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.381203  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-0: (1.151128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.384650  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1: (1.616182ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.388667  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.266183ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
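
The DELETE run above (ppod-0 through ppod-49, rpod-0, rpod-1, preemptor-pod; the rpod-1 DELETE already answers 404) followed by a GET per pod that is expected to 404 is a delete-then-verify cleanup pass. A sketch of that pass against client-go of this log's vintage (pre-context signatures); cleanupPods is a hypothetical helper name, not the test's own:

package cleanup

import (
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// cleanupPods deletes every named pod, tolerating pods that are already
// gone (the 404 on rpod-1 above), then polls GET until the server
// answers NotFound for each of them.
func cleanupPods(client kubernetes.Interface, ns string, names []string) error {
	for _, name := range names {
		err := client.CoreV1().Pods(ns).Delete(name, &metav1.DeleteOptions{})
		if err != nil && !apierrors.IsNotFound(err) {
			return fmt.Errorf("deleting %s/%s: %v", ns, name, err)
		}
	}
	for _, name := range names {
		err := wait.Poll(100*time.Millisecond, wait.ForeverTestTimeout, func() (bool, error) {
			_, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // the GET ... 404 lines above land here
			}
			return false, err
		})
		if err != nil {
			return fmt.Errorf("%s/%s still present: %v", ns, name, err)
		}
	}
	return nil
}
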
I0319 16:15:19.392960  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.394617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.393642  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0
I0319 16:15:19.393660  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0
I0319 16:15:19.393818  106300 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0", node "node1"
I0319 16:15:19.393832  106300 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0319 16:15:19.393959  106300 factory.go:733] Attempting to bind rpod-0 to node1
I0319 16:15:19.397758  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (4.0319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.398940  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1
I0319 16:15:19.398959  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1
I0319 16:15:19.399157  106300 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1", node "node1"
I0319 16:15:19.399181  106300 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0319 16:15:19.399272  106300 factory.go:733] Attempting to bind rpod-1 to node1
I0319 16:15:19.400678  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-0/binding: (5.875502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.400942  106300 scheduler.go:572] pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0319 16:15:19.402215  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1/binding: (2.587643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.402423  106300 scheduler.go:572] pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
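
Each "is bound successfully" line above follows a POST to the pod's binding subresource (the .../pods/rpod-0/binding and .../pods/rpod-1/binding requests); the AssumePodVolumes lines are the volume step, a no-op here because all PVCs are already bound. A sketch of the bind call, assuming client-go's Bind expansion of this era (newer releases add a context argument):

package bind

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPodToNode issues the POST .../pods/<name>/binding seen above:
// the scheduler does not write spec.nodeName directly, it creates a
// Binding object whose target names the chosen node.
func bindPodToNode(client kubernetes.Interface, ns, podName, nodeName string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: podName},
		Target:     v1.ObjectReference{Kind: "Node", Name: nodeName},
	}
	return client.CoreV1().Pods(ns).Bind(binding)
}
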
I0319 16:15:19.403391  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.150746ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.407072  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.989017ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.501184  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-0: (2.549277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.551442  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:19.554020  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:19.555253  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:19.555809  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:19.557257  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
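
The burst of "forcing resync" lines is the shared informers replaying their caches to registered handlers; a resync fires whenever an informer factory is built with a non-zero resync period. A sketch of that wiring (the 12s period is an assumption for illustration, not a value read from the test):

package example

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// newFactory builds a SharedInformerFactory whose informers re-deliver
// their cached objects at every resync period, producing the
// reflector.go "forcing resync" lines above.
func newFactory(client kubernetes.Interface) informers.SharedInformerFactory {
	return informers.NewSharedInformerFactory(client, 12*time.Second) // assumed period
}
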
I0319 16:15:19.604491  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1: (2.179542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.604869  106300 preemption_test.go:561] Creating the preemptor pod...
I0319 16:15:19.607386  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.225693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.607691  106300 preemption_test.go:567] Creating additional pods...
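
The two preemption_test.go lines mark the setup of the race: one preemptor pod whose request cannot fit beside the running rpods, then a burst of fifty filler pods (ppod-0 through ppod-49, matching the POSTs that follow). A hedged sketch of that setup; mkPod/createRace and the priority and resource values are illustrative stand-ins, not the test's own helpers or numbers:

package setup

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// mkPod builds a pause pod with a priority and CPU/memory requests.
func mkPod(ns, name string, priority int32, cpu, mem string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: name},
		Spec: v1.PodSpec{
			Priority: &priority,
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						v1.ResourceCPU:    resource.MustParse(cpu),
						v1.ResourceMemory: resource.MustParse(mem),
					},
				},
			}},
		},
	}
}

// createRace mirrors "Creating the preemptor pod..." followed by
// "Creating additional pods...": the high-priority preemptor over-asks,
// the low-priority fillers are what it may preempt.
func createRace(client kubernetes.Interface, ns string) error {
	if _, err := client.CoreV1().Pods(ns).Create(mkPod(ns, "preemptor-pod", 100, "4", "4Gi")); err != nil {
		return err
	}
	for i := 0; i < 50; i++ {
		if _, err := client.CoreV1().Pods(ns).Create(mkPod(ns, fmt.Sprintf("ppod-%d", i), 0, "100m", "100Mi")); err != nil {
			return err
		}
	}
	return nil
}
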
I0319 16:15:19.610213  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.218762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.612658  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.97416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.616011  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.912527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.618380  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.967255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.620738  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.898308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.622477  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:19.622501  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:19.622621  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.622664  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
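
The "no fit" verdict above comes from the node-resources predicate: sum the requests already placed on the node, and for each resource where the incoming pod's request exceeds what remains, record an "Insufficient <resource>" reason. A minimal sketch with stand-in quantities (the real predicate compares resource.Quantity values):

package fit

// resources is a stand-in for the CPU/memory pair the real predicate
// compares as resource.Quantity values.
type resources struct {
	milliCPU int64
	memory   int64
}

// insufficientResources reproduces the shape of the verdict above: one
// "Insufficient <resource>" reason per resource that cannot fit, which
// the scheduler aggregates into messages like "0/1 nodes are
// available: 1 Insufficient cpu, 1 Insufficient memory.".
func insufficientResources(allocatable, inUse, request resources) []string {
	var reasons []string
	if request.milliCPU > allocatable.milliCPU-inUse.milliCPU {
		reasons = append(reasons, "Insufficient cpu")
	}
	if request.memory > allocatable.memory-inUse.memory {
		reasons = append(reasons, "Insufficient memory")
	}
	return reasons
}
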
I0319 16:15:19.623448  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.207554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.624450  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (954.836µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36788]
I0319 16:15:19.624836  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod/status: (1.910424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.625215  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.722191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36790]
I0319 16:15:19.626111  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.202991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36650]
I0319 16:15:19.626553  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.303827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.626813  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
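
"Node node1 is a potential node for preemption" is the preemption pass picking candidate nodes after the no-fit verdict: a node qualifies if the pods that could legally be evicted, i.e. those with lower priority than the preemptor, would free enough room. A sketch of the victim-collection step, with a stand-in pod type:

package preemption

// podInfo is a stand-in; the real pass works on *v1.Pod plus the
// node's resource accounting.
type podInfo struct {
	name     string
	priority int32
}

// potentialVictims collects every pod on the node with lower priority
// than the preemptor. If evicting all of them still would not make the
// preemptor fit, the node is ruled out; otherwise it is logged as a
// potential node for preemption and a minimal victim set is chosen.
func potentialVictims(preemptorPriority int32, podsOnNode []podInfo) []podInfo {
	var victims []podInfo
	for _, p := range podsOnNode {
		if p.priority < preemptorPriority {
			victims = append(victims, p)
		}
	}
	return victims
}
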
I0319 16:15:19.627808  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.365231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36790]
I0319 16:15:19.629603  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod/status: (2.408268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.629916  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.761841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36790]
I0319 16:15:19.632096  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.774302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36790]
I0319 16:15:19.633551  106300 wrap.go:47] DELETE /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/rpod-1: (3.541884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.633844  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:19.633905  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:19.634140  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.634184  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.634195  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.604486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36790]
I0319 16:15:19.635810  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (1.121261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36790]
I0319 16:15:19.635967  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0/status: (1.586851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36788]
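
Each "Updating pod condition ... to (PodScheduled==False, Reason=Unschedulable)" line pairs with a PUT .../pods/<name>/status like the one just above: the scheduler records why the pod stays pending so kubectl and controllers can surface it. A sketch, assuming the pre-context client-go signature; the real helper replaces an existing PodScheduled condition rather than appending a duplicate as this simplified version does:

package status

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// markUnschedulable performs the PUT .../pods/<name>/status seen above,
// recording the scheduling failure on the pod object itself.
func markUnschedulable(client kubernetes.Interface, pod *v1.Pod, message string) error {
	pod.Status.Conditions = append(pod.Status.Conditions, v1.PodCondition{
		Type:    v1.PodScheduled,
		Status:  v1.ConditionFalse,
		Reason:  v1.PodReasonUnschedulable, // "Unschedulable"
		Message: message,
	})
	_, err := client.CoreV1().Pods(pod.Namespace).UpdateStatus(pod)
	return err
}
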
I0319 16:15:19.636565  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.435991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.637573  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (1.235911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36790]
I0319 16:15:19.637846  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.638017  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:19.638039  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:19.638191  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.638243  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.638318  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.571874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36794]
I0319 16:15:19.638598  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.382792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.640336  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1/status: (1.851787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36792]
I0319 16:15:19.640564  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (2.148803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36790]
I0319 16:15:19.640757  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.058733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36794]
I0319 16:15:19.641210  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.208979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.642248  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (1.308741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36792]
I0319 16:15:19.642575  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.642681  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.514938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36794]
I0319 16:15:19.642738  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:19.642756  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:19.642858  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.642935  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.644877  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.385181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36796]
I0319 16:15:19.644907  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.810561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36660]
I0319 16:15:19.644926  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2/status: (1.766256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36790]
I0319 16:15:19.644928  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.399588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36798]
I0319 16:15:19.646540  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.208038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36798]
I0319 16:15:19.646814  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.646898  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.512385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36796]
I0319 16:15:19.647020  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:19.647040  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:19.647181  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.647224  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.649228  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.439615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0319 16:15:19.649539  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.238303ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36796]
I0319 16:15:19.649876  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (2.224217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36800]
I0319 16:15:19.649971  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3/status: (2.387405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36798]
I0319 16:15:19.652129  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.756993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36796]
I0319 16:15:19.652395  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.652747  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:19.652765  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:19.652877  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.652929  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.655261  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.668883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36804]
I0319 16:15:19.655755  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (2.577866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0319 16:15:19.656109  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4/status: (2.935209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36796]
I0319 16:15:19.657809  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (1.278928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0319 16:15:19.658101  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.658271  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:19.658282  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:19.658360  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.658513  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.658563  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.836575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36804]
I0319 16:15:19.660014  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (1.084801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36804]
I0319 16:15:19.660840  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.612151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.661266  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5/status: (2.303774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36802]
I0319 16:15:19.662917  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.071846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36804]
I0319 16:15:19.663755  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (1.993085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36808]
I0319 16:15:19.664012  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.664280  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:19.664297  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:19.664391  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.664437  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.666627  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.479055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36810]
I0319 16:15:19.667797  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (2.662764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.670009  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6/status: (5.32862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36808]
I0319 16:15:19.670446  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (6.509165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36804]
I0319 16:15:19.672115  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (1.310789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.672330  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.672590  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:19.672605  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:19.672707  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.672746  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.676517  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.108235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36812]
I0319 16:15:19.677143  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (5.415504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36804]
I0319 16:15:19.677323  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (4.13066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36810]
I0319 16:15:19.677727  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7/status: (4.741458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.679838  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (1.610199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.680078  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.116582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36810]
I0319 16:15:19.680141  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.680317  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:19.680340  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:19.680450  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.680554  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.682924  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.269797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.682935  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (2.226562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36812]
I0319 16:15:19.684040  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.506079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36814]
I0319 16:15:19.684699  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8/status: (2.854585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36816]
I0319 16:15:19.686054  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.304324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36812]
I0319 16:15:19.686622  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.426457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36814]
I0319 16:15:19.686832  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.687644  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:19.687668  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:19.687801  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.687849  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.689796  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (1.251686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.690319  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.735297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36818]
I0319 16:15:19.690385  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9/status: (2.281918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36814]
I0319 16:15:19.690617  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.945573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36812]
I0319 16:15:19.692845  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (1.436151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36814]
I0319 16:15:19.693107  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.693108  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.023053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36818]
I0319 16:15:19.693269  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:19.693285  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:19.693376  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.693434  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.695580  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.604091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36822]
I0319 16:15:19.695719  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10/status: (2.074584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.695735  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.846329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0319 16:15:19.695812  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.196353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36814]
I0319 16:15:19.697337  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.232872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0319 16:15:19.697585  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.697773  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:19.697793  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:19.697904  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.697961  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.698177  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.886641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.699935  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.483574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.699969  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11/status: (1.780062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0319 16:15:19.700300  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.761397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.700498  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.763902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36826]
I0319 16:15:19.701405  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.09132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0319 16:15:19.701645  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.701852  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:19.701875  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:19.701993  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.702132  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.702567  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.697942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.704720  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12/status: (2.368546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0319 16:15:19.704763  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.797845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.705438  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (2.552139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.707058  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.809023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0319 16:15:19.707303  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.707394  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.897823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36828]
I0319 16:15:19.707729  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:19.707748  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:19.707986  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.708034  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.708688  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.434866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.709484  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (1.201973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.711623  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.427151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.711826  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (3.554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36820]
I0319 16:15:19.712232  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
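ppod-1's appearance here, out of sequence between ppod-12 and ppod-13 (ppod-2, ppod-3, ppod-4, ppod-7 and ppod-12 likewise reappear further down), is a retry: pods that fail to schedule are parked in an unschedulable queue and moved back to the active queue when a cluster event might have changed the outcome, which is precisely the interleaving this race test stresses. A toy model of that re-queue behavior, not the actual scheduling_queue.go implementation:

```go
// Toy model of the retry behavior visible in this log: pods that fail
// to schedule are parked and later re-enqueued, so ppod-1 shows up
// again among the first attempts of ppod-12/ppod-13.
package sketch

type Queue struct {
	active        []string // pods ready to be tried
	unschedulable []string // pods parked after a failed attempt
}

// Pop takes the next pod to try, matching the
// "About to try and schedule pod ..." log lines.
func (q *Queue) Pop() (string, bool) {
	if len(q.active) == 0 {
		return "", false
	}
	p := q.active[0]
	q.active = q.active[1:]
	return p, true
}

// MarkUnschedulable parks a pod after a failed scheduling attempt.
func (q *Queue) MarkUnschedulable(pod string) {
	q.unschedulable = append(q.unschedulable, pod)
}

// MoveAllToActive is invoked on cluster events (e.g. a pod deletion)
// that might make parked pods schedulable again.
func (q *Queue) MoveAllToActive() {
	q.active = append(q.active, q.unschedulable...)
	q.unschedulable = nil
}
```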
I0319 16:15:19.712444  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:19.712478  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:19.712594  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.712638  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.713002  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-1.158d684b6ae9c63d: (3.966933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36830]
I0319 16:15:19.714310  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.087859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.717672  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (4.309944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36832]
I0319 16:15:19.718424  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (3.696923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.718590  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13/status: (5.39044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36806]
I0319 16:15:19.719288  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (5.724577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36830]
I0319 16:15:19.720730  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.63668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.721073  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (1.47589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36830]
I0319 16:15:19.721326  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.721512  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:19.721535  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:19.721641  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.721686  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.723223  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.090344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36832]
I0319 16:15:19.723448  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.723680  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:19.723704  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:19.723807  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.207788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.723811  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.723856  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.724018  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.700814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.725188  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (1.185823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.725734  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14/status: (1.581752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36832]
I0319 16:15:19.725914  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.607905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.727398  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (1.189732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.727672  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.727930  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:19.727948  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:19.728027  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.728072  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.728097  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.806275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.728765  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-2.158d684b6b313c04: (6.191883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36836]
I0319 16:15:19.729507  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (1.102302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36838]
I0319 16:15:19.729836  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15/status: (1.577173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.730147  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.370506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.730564  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.368669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36836]
I0319 16:15:19.731677  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (995.393µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.732060  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
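The recurring failure string has a fixed shape: "0/N nodes are available:" followed by a comma-separated tally of per-node predicate failures (here the one node fails both the CPU and the memory check). A toy reconstruction of that aggregation, purely illustrative:

```go
// Toy sketch of how the "0/1 nodes are available: ..." summary string
// aggregates per-node predicate failures; not the scheduler's code.
package sketch

import (
	"fmt"
	"sort"
	"strings"
)

func fitError(numNodes int, failures map[string]int) string {
	reasons := make([]string, 0, len(failures))
	for msg, count := range failures {
		reasons = append(reasons, fmt.Sprintf("%d %s", count, msg))
	}
	sort.Strings(reasons) // map order is random; keep output stable
	return fmt.Sprintf("0/%d nodes are available: %s.", numNodes, strings.Join(reasons, ", "))
}

// fitError(1, map[string]int{"Insufficient cpu": 1, "Insufficient memory": 1})
// => "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory."
```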
I0319 16:15:19.732285  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:19.732304  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:19.732405  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.732449  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.732839  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.587628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36838]
I0319 16:15:19.732911  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.692908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.734042  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.27816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.734269  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.734274  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.164024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36840]
I0319 16:15:19.734489  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:19.734510  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:19.734677  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.734737  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.734838  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.463119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36838]
I0319 16:15:19.736056  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-3.158d684b6b72cc58: (2.693058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.736937  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.684644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36838]
I0319 16:15:19.737034  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16/status: (2.084698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36840]
I0319 16:15:19.737162  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (2.176848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.738507  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.767696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.738629  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (1.261484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36824]
I0319 16:15:19.738845  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.739005  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:19.739019  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.66986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36840]
I0319 16:15:19.739025  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:19.739150  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.739197  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.740495  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (1.141705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.740551  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (1.187843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36838]
I0319 16:15:19.740955  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.741158  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:19.741176  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:19.741299  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.630987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36842]
I0319 16:15:19.741314  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.741352  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.742599  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-4.158d684b6bc9d819: (2.479884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36844]
I0319 16:15:19.743598  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.51369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36842]
I0319 16:15:19.743806  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (1.601323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.744087  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.166056ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36844]
I0319 16:15:19.745259  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17/status: (3.479239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36838]
I0319 16:15:19.745424  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.437974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36842]
I0319 16:15:19.746789  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (1.11989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36844]
I0319 16:15:19.747041  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.747242  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:19.747327  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:19.747382  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.53944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.747480  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.747531  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.748882  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (1.014485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.749748  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.711483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36848]
I0319 16:15:19.751001  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (2.998271ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36846]
I0319 16:15:19.751001  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18/status: (3.236657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36844]
I0319 16:15:19.753143  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (1.11154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.753234  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods: (1.780475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36848]
I0319 16:15:19.753360  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.753692  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:19.753714  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:19.753828  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.753881  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.755468  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (1.325132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.755966  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.369219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.756080  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19/status: (1.932377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36848]
I0319 16:15:19.757704  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-19: (1.181976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.758007  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.758196  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:19.758212  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7
I0319 16:15:19.758373  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.758431  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.759795  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (1.164628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.760011  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.760087  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (1.416295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.760255  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:19.760276  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20
I0319 16:15:19.760381  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.760427  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.761550  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-7.158d684b6cf843bf: (2.240397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36852]
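The PATCH to events/ppod-7.158d684b6cf843bf above (like the earlier ones for ppod-1 through ppod-4 and ppod-12) is event aggregation at work: the first FailedScheduling occurrence for a pod is POSTed as a new Event (201), and identical repeats are PATCHed onto the existing object to bump its count (200). The standard client-go wiring that produces this behavior looks roughly like the sketch below; "clientset" is an assumed kubernetes.Interface.

```go
// Hedged sketch of the standard event-recorder wiring. client-go's
// broadcaster deduplicates repeated identical events into count
// updates, which matches the POST/PATCH mix in this log.
package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
)

func newRecorder(clientset kubernetes.Interface) record.EventRecorder {
	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: clientset.CoreV1().Events(""),
	})
	return broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "scheduler"})
}

// Usage (pod is a *v1.Pod):
//   recorder.Eventf(pod, v1.EventTypeWarning, "FailedScheduling",
//       "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.")
```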
I0319 16:15:19.761796  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (1.118421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.762403  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20/status: (1.720128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.763318  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.376143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36852]
I0319 16:15:19.764048  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-20: (1.066608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36834]
I0319 16:15:19.764314  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.764519  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:19.764537  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21
I0319 16:15:19.764642  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.764684  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.765994  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (1.057526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.766876  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.613846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36854]
I0319 16:15:19.767544  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21/status: (2.635069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36852]
I0319 16:15:19.769104  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-21: (1.125393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36854]
I0319 16:15:19.769526  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.769720  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:19.769738  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:19.769818  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.769860  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.771963  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22/status: (1.806667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36854]
I0319 16:15:19.772361  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.548412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.772535  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (1.83077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.773537  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (1.200414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36854]
I0319 16:15:19.773822  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.774019  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:19.774034  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23
I0319 16:15:19.774144  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.774190  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.775563  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (1.144134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.775970  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23/status: (1.568412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.775991  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.234751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36858]
I0319 16:15:19.777521  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-23: (1.157959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.777740  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.777903  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:19.777922  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24
I0319 16:15:19.778025  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.778079  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.779403  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (1.067356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.780156  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.538124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36860]
I0319 16:15:19.780168  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24/status: (1.882648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.781608  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-24: (1.053665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.781949  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.782147  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:19.782163  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25
I0319 16:15:19.782250  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.782305  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.783802  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (1.260644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.784369  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25/status: (1.825169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.784380  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.539241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36862]
I0319 16:15:19.785898  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-25: (1.187737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.786149  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.786631  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:19.786652  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26
I0319 16:15:19.786771  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.786813  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.788357  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.239131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.788829  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26/status: (1.820082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.790258  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.944178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36864]
I0319 16:15:19.790621  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-26: (1.098095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36850]
I0319 16:15:19.790864  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.791036  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:19.791056  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27
I0319 16:15:19.791178  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.791225  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.792629  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (1.060702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.793151  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27/status: (1.715526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36864]
I0319 16:15:19.793245  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.372858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36866]
I0319 16:15:19.794640  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-27: (1.125513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36864]
I0319 16:15:19.794861  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.795029  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:19.795048  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28
I0319 16:15:19.795187  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.795230  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.796559  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (1.052898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.797174  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28/status: (1.714416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36864]
I0319 16:15:19.797221  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.398086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36868]
I0319 16:15:19.798678  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-28: (1.122823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36864]
I0319 16:15:19.798931  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.799107  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:19.799123  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29
I0319 16:15:19.799202  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.799251  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.800674  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (1.100423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.801050  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29/status: (1.602654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36864]
I0319 16:15:19.801436  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.366277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36870]
I0319 16:15:19.802506  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-29: (1.085088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36864]
I0319 16:15:19.802747  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.802898  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:19.802915  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12
I0319 16:15:19.803005  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.803051  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.804503  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.27256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36870]
I0319 16:15:19.804611  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.311308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.804734  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.804880  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:19.804894  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30
I0319 16:15:19.805094  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.805149  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.805907  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-12.158d684b6eb89eb6: (2.225211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36872]
I0319 16:15:19.806433  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (1.102075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.807093  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30/status: (1.688878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36870]
I0319 16:15:19.808265  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.943878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36872]
I0319 16:15:19.808757  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-30: (1.191643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36870]
I0319 16:15:19.809049  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.809239  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:19.809257  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31
I0319 16:15:19.809354  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.809415  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.811289  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (1.307323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.811670  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31/status: (1.689473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36872]
I0319 16:15:19.812195  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.15378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36874]
I0319 16:15:19.813100  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-31: (1.079745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36872]
I0319 16:15:19.813393  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.813652  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:19.813668  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32
I0319 16:15:19.813756  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.813793  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.815121  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (1.080017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.815668  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32/status: (1.660291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36874]
I0319 16:15:19.815757  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.449591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36876]
I0319 16:15:19.817175  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-32: (1.129158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36874]
I0319 16:15:19.817377  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.817554  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:19.817574  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33
I0319 16:15:19.817671  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.817711  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.819010  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (1.066201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.820232  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.569178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36878]
I0319 16:15:19.820402  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33/status: (2.453386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36874]
I0319 16:15:19.821987  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-33: (1.138485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36878]
I0319 16:15:19.822282  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.822484  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:19.822501  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34
I0319 16:15:19.822598  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.822646  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.824260  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (1.404063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36878]
I0319 16:15:19.824621  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.513029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0319 16:15:19.824757  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34/status: (1.88091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36856]
I0319 16:15:19.826274  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-34: (1.164405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0319 16:15:19.826583  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.826761  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:19.826777  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:19.826875  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.826918  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.828955  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.540805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36882]
I0319 16:15:19.828971  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35/status: (1.828689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0319 16:15:19.829427  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (2.240038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36878]
I0319 16:15:19.830478  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (1.070981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36880]
I0319 16:15:19.830750  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.830935  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:19.830951  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36
I0319 16:15:19.831044  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.831102  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.832402  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (1.090132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36882]
I0319 16:15:19.832690  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.364933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36878]
I0319 16:15:19.833216  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36/status: (1.702492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36884]
I0319 16:15:19.834708  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-36: (1.05877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36878]
I0319 16:15:19.834944  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.835211  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:19.835231  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37
I0319 16:15:19.835355  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.835403  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.836796  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (1.140118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36882]
I0319 16:15:19.837347  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.449649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36886]
I0319 16:15:19.837513  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37/status: (1.857681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36878]
I0319 16:15:19.839128  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-37: (1.146325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36886]
I0319 16:15:19.839384  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.839582  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:19.839601  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38
I0319 16:15:19.839702  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.839752  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.840995  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (974.903µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36882]
I0319 16:15:19.841490  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38/status: (1.492584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36886]
I0319 16:15:19.841915  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.648752ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36888]
I0319 16:15:19.843039  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-38: (1.197754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36886]
I0319 16:15:19.843283  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.843504  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:19.843525  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39
I0319 16:15:19.843622  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.843659  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.845054  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (1.033244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36882]
I0319 16:15:19.845425  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39/status: (1.517233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36888]
I0319 16:15:19.845622  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.408572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36890]
I0319 16:15:19.846906  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-39: (1.074451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36888]
I0319 16:15:19.847154  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.847332  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:19.847348  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40
I0319 16:15:19.847448  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.847512  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.848778  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (991.367µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36882]
I0319 16:15:19.850808  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.609074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36892]
I0319 16:15:19.851292  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40/status: (3.570651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36888]
I0319 16:15:19.852856  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-40: (1.077195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36892]
I0319 16:15:19.853093  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.853265  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:19.853287  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41
I0319 16:15:19.853404  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.853469  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.854821  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.142216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36892]
I0319 16:15:19.855029  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (1.15507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36894]
I0319 16:15:19.855601  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41/status: (1.931921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36882]
I0319 16:15:19.856539  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.331363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36896]
I0319 16:15:19.857125  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-41: (1.123751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36894]
I0319 16:15:19.857388  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.857581  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:19.857612  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16
I0319 16:15:19.857707  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.857742  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.859031  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (1.049005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36892]
I0319 16:15:19.859031  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-16: (1.146033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36896]
I0319 16:15:19.859285  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.859432  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:19.859449  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42
I0319 16:15:19.859592  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.859628  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.860734  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-16.158d684b70aa2945: (2.278995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36898]
I0319 16:15:19.861081  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (1.213448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36892]
I0319 16:15:19.861528  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42/status: (1.680152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36896]
I0319 16:15:19.862632  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.492435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36898]
I0319 16:15:19.863215  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-42: (1.344806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36896]
I0319 16:15:19.863471  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.863648  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:19.863663  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43
I0319 16:15:19.863758  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.863804  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.865083  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (1.047657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36892]
I0319 16:15:19.865838  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43/status: (1.809639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36898]
I0319 16:15:19.865894  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.463257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:19.867589  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-43: (1.189951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:19.867831  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.868041  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:19.868059  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44
I0319 16:15:19.868182  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.868229  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.869866  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (1.410892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:19.870356  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44/status: (1.853358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36892]
I0319 16:15:19.872187  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-44: (1.354707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36892]
I0319 16:15:19.872307  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (3.097274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36902]
I0319 16:15:19.872442  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.872730  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:19.872755  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45
I0319 16:15:19.872853  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.872897  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.874322  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (1.134293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:19.874865  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45/status: (1.693827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36892]
I0319 16:15:19.875336  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.729432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36904]
I0319 16:15:19.876876  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-45: (1.342239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36892]
I0319 16:15:19.877215  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.877388  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:19.877406  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46
I0319 16:15:19.877549  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.877602  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.878883  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (1.041877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:19.879653  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.498786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36906]
I0319 16:15:19.879776  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46/status: (1.937892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36904]
I0319 16:15:19.881449  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-46: (1.284678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36906]
I0319 16:15:19.881736  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.881919  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:19.881962  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47
I0319 16:15:19.882074  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.882120  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.883497  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (1.09511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:19.884104  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47/status: (1.673391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36906]
I0319 16:15:19.884264  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.40786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36908]
I0319 16:15:19.885603  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-47: (1.124161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36906]
I0319 16:15:19.885875  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.886096  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:19.886114  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48
I0319 16:15:19.886217  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.886281  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.887778  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (1.242889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:19.888295  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (1.395595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:19.888437  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48/status: (1.910466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36906]
I0319 16:15:19.890476  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-48: (1.443962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:19.890752  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.890938  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:19.890984  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49
I0319 16:15:19.891137  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.891189  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.892523  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (1.10618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:19.893974  106300 wrap.go:47] PUT /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49/status: (2.557881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:19.894028  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.200276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36912]
I0319 16:15:19.899955  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-49: (5.439326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:19.900379  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.900608  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:19.900625  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22
I0319 16:15:19.900720  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.900763  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.904297  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (2.704329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:19.904788  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-22: (3.82825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:19.905099  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.905268  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:19.905285  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35
I0319 16:15:19.905413  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:19.905468  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:19.924489  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (18.669603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:19.924548  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-35: (18.700865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:19.924912  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:19.925560  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-22.158d684b72c21ddd: (23.910984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36938]
I0319 16:15:19.930826  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-35.158d684b7628c82f: (4.557804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:19.958099  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.310541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:20.057444  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.860212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:20.157542  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.956438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:20.257439  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.938405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:20.357666  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.146155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:20.457599  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.036407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:20.551647  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:20.554263  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:20.555413  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:20.555983  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:20.557237  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.689001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:20.557441  106300 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0319 16:15:20.657400  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.78789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:20.757747  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.137907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:20.857718  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.193557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:20.957801  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.172101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.057651  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (2.005123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.157602  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.985554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.257229  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.721921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.357366  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.788292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.444589  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:21.444628  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod
I0319 16:15:21.444853  106300 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod", node "node1"
I0319 16:15:21.444875  106300 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0319 16:15:21.444932  106300 factory.go:733] Attempting to bind preemptor-pod to node1
I0319 16:15:21.445022  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:21.445049  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0
I0319 16:15:21.445209  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.445272  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.447812  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (1.197058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.448183  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod/binding: (2.84133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.447812  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (1.998049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37236]
I0319 16:15:21.448584  106300 scheduler.go:572] pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0319 16:15:21.448660  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.448856  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.448977  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:21.448992  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5
I0319 16:15:21.449142  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.449185  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.450738  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-0.158d684b6aabd63c: (4.687148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37238]
I0319 16:15:21.451493  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (1.999459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.451869  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.451963  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (2.565403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.452199  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.452371  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:21.452390  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6
I0319 16:15:21.452494  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.452536  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.453220  106300 wrap.go:47] POST /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events: (2.057046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37238]
I0319 16:15:21.454004  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (1.279424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.454620  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.454802  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:21.454818  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8
I0319 16:15:21.454831  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (2.047776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.454911  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.454961  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.455094  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.456920  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.795047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.457073  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (1.851933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.457201  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.457325  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.457363  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:21.457386  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9
I0319 16:15:21.457429  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/preemptor-pod: (1.31208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37242]
I0319 16:15:21.457515  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.457555  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.457740  106300 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0319 16:15:21.457984  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-5.158d684b6c1e3a05: (4.099341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37238]
I0319 16:15:21.458954  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (1.181136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.459204  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.460597  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (2.885447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.460841  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.460916  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-0: (2.799028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37244]
I0319 16:15:21.461014  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:21.461032  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10
I0319 16:15:21.461143  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.461205  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.461965  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-6.158d684b6c79792a: (2.891999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37238]
I0319 16:15:21.462623  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.110035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.462944  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.463017  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.297452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37246]
I0319 16:15:21.463032  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (1.650956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.463133  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:21.463143  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11
I0319 16:15:21.463229  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.463266  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.463333  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.464477  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.056963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.464499  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.141666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.464711  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.464883  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:21.464898  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1
I0319 16:15:21.464970  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.465005  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.465878  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.043002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.465935  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (2.327719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37248]
I0319 16:15:21.466205  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.466347  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-8.158d684b6d6f4720: (3.699627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37238]
I0319 16:15:21.467481  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (1.919384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36910]
I0319 16:15:21.467807  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (1.526071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37248]
I0319 16:15:21.467807  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-1: (2.194896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37250]
I0319 16:15:21.468052  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.468133  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.468585  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:21.468609  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13
I0319 16:15:21.468706  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.468749  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.469640  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-9.158d684b6ddead1a: (2.57743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37238]
I0319 16:15:21.470364  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-5: (2.16763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37250]
I0319 16:15:21.471645  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (2.348579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.471774  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (1.442368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37238]
I0319 16:15:21.471912  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.472127  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:21.472141  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2
I0319 16:15:21.472170  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.472236  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.472273  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.472908  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-6: (1.695402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37250]
I0319 16:15:21.473921  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-10.158d684b6e33c67b: (3.153367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37254]
I0319 16:15:21.474019  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.508161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37252]
I0319 16:15:21.474092  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-2: (1.533151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.474698  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.474876  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.475092  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:21.475117  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14
I0319 16:15:21.475212  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.475253  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.475939  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-7: (2.518675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37250]
I0319 16:15:21.476913  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (1.306939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37252]
I0319 16:15:21.477154  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.477278  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-8: (967.341µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37250]
I0319 16:15:21.477356  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:21.477370  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15
I0319 16:15:21.477469  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.477499  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (1.34525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.477510  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.477710  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.478959  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (1.226022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37252]
I0319 16:15:21.478974  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-15: (1.220811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37250]
I0319 16:15:21.479193  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.479210  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.479359  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:21.479376  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3
I0319 16:15:21.479471  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.479503  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-9: (1.774643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.479514  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.481055  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-10: (1.214707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37252]
I0319 16:15:21.481058  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.405406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37250]
I0319 16:15:21.481497  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.481621  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-3: (1.8886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.481692  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:21.481720  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4
I0319 16:15:21.481875  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.481926  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.482202  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.482322  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-11.158d684b6e78ffc8: (7.801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37254]
I0319 16:15:21.482534  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-11: (1.150469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37252]
I0319 16:15:21.483487  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (1.039196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.483512  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-4: (1.391787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37250]
I0319 16:15:21.483742  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.483930  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:21.483972  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17
I0319 16:15:21.484000  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-12: (1.117561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37252]
I0319 16:15:21.484111  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.484192  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.484610  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.487251  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (1.919477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:36900]
I0319 16:15:21.487291  106300 wrap.go:47] PATCH /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/events/ppod-1.158d684b6ae9c63d: (3.984328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37254]
I0319 16:15:21.487333  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-17: (1.783039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37250]
I0319 16:15:21.487529  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.487569  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.487710  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:21.487722  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18
I0319 16:15:21.487816  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.487849  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0319 16:15:21.488211  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-13: (1.346892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37252]
I0319 16:15:21.490834  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (2.668282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37250]
I0319 16:15:21.491157  106300 generic_scheduler.go:1118] Node node1 is a potential node for preemption.
I0319 16:15:21.491708  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-18: (3.106848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37252]
I0319 16:15:21.492126  106300 wrap.go:47] GET /api/v1/namespaces/preemption-race2c71566e-4a62-11e9-b474-0242ac110002/pods/ppod-14: (3.452491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:37284]
I0319 16:15:21.492702  106300 backoff_utils.go:79] Backing off 2s
I0319 16:15:21.493715  106300 scheduling_queue.go:908] About to try and schedule pod preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:21.493734  106300 scheduler.go:453] Attempting to schedule pod: preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19
I0319 16:15:21.493841  106300 factory.go:647] Unable to schedule preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0319 16:15:21.493883  106300 factory.go:742] Updating pod condition for preemption-race2c71566e-4a62-11e9-b474-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)