Result: FAILURE
Tests: 1 failed / 665 succeeded
Started: 2019-03-20 23:24
Elapsed: 26m49s
Revision:
Builder: gke-prow-containerd-pool-99179761-lbfz
pod: 3a019cef-4b67-11e9-a14c-0a580a6c09d0
resultstore: https://source.cloud.google.com/results/invocations/ac64ea6c-483b-45a4-bee6-828aeb15faf5/targets/test
infra-commit: ff8e567a0
repo: k8s.io/kubernetes
repo-commit: 4940eae478248670cbed1bcde15def96229b5c7e
repos: {k8s.io/kubernetes: master}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 29s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
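
To reproduce locally, a sketch (not the job's exact invocation): the command above assumes a kubernetes checkout at the repo-commit listed in the metadata and an etcd reachable at http://127.0.0.1:2379, which the integration apiserver in the log below dials. The WHAT/KUBE_TEST_ARGS Makefile variables are the conventional knobs for scoping integration tests and may differ between releases:

# assumes a checkout under $GOPATH/src/k8s.io/kubernetes at the repo-commit above
cd $GOPATH/src/k8s.io/kubernetes
git checkout 4940eae478248670cbed1bcde15def96229b5c7e
# install a test etcd if one is not already on PATH
hack/install-etcd.sh
# run only the failing test; the build system brings up etcd for the apiserver
make test-integration WHAT=./test/integration/scheduler KUBE_TEST_ARGS="-run ^TestPreemptionRaces$"
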
I0320 23:44:26.999114  106048 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0320 23:44:26.999144  106048 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0320 23:44:26.999156  106048 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0320 23:44:26.999173  106048 master.go:233] Using reconciler: 
I0320 23:44:27.000947  106048 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.001073  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.001089  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.001130  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.001200  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.001525  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.001600  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.001665  106048 store.go:1319] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0320 23:44:27.001695  106048 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.001735  106048 reflector.go:161] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0320 23:44:27.001879  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.001889  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.001918  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.001980  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.002266  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.002337  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.002339  106048 store.go:1319] Monitoring events count at <storage-prefix>//events
I0320 23:44:27.002390  106048 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.002461  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.002469  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.002502  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.002552  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.002785  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.002867  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.002908  106048 store.go:1319] Monitoring limitranges count at <storage-prefix>//limitranges
I0320 23:44:27.002934  106048 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.002959  106048 reflector.go:161] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0320 23:44:27.002997  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.003007  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.003035  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.003192  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.003458  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.003554  106048 store.go:1319] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0320 23:44:27.003673  106048 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.003729  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.003742  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.003771  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.003807  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.003833  106048 reflector.go:161] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0320 23:44:27.003986  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.004283  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.004344  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.004416  106048 store.go:1319] Monitoring secrets count at <storage-prefix>//secrets
I0320 23:44:27.004496  106048 reflector.go:161] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0320 23:44:27.004562  106048 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.004710  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.004755  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.004803  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.004860  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.005087  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.005138  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.005327  106048 store.go:1319] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0320 23:44:27.005472  106048 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.005504  106048 reflector.go:161] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0320 23:44:27.005533  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.005548  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.005577  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.005644  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.005860  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.005889  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.005988  106048 store.go:1319] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0320 23:44:27.006030  106048 reflector.go:161] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0320 23:44:27.006167  106048 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.006255  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.006271  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.006304  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.006408  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.006674  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.006822  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.007084  106048 store.go:1319] Monitoring configmaps count at <storage-prefix>//configmaps
I0320 23:44:27.007118  106048 reflector.go:161] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0320 23:44:27.007326  106048 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.007446  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.007488  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.007525  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.007611  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.007911  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.007992  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.008031  106048 store.go:1319] Monitoring namespaces count at <storage-prefix>//namespaces
I0320 23:44:27.008173  106048 reflector.go:161] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0320 23:44:27.008255  106048 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.008329  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.008348  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.008378  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.008457  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.008790  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.008852  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.008898  106048 store.go:1319] Monitoring endpoints count at <storage-prefix>//endpoints
I0320 23:44:27.008938  106048 reflector.go:161] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0320 23:44:27.009113  106048 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.009212  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.009231  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.009271  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.009364  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.009608  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.009704  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.009760  106048 store.go:1319] Monitoring nodes count at <storage-prefix>//nodes
I0320 23:44:27.009779  106048 reflector.go:161] Listing and watching *core.Node from storage/cacher.go:/nodes
I0320 23:44:27.009960  106048 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.010026  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.010035  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.010085  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.010171  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.010503  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.010677  106048 store.go:1319] Monitoring pods count at <storage-prefix>//pods
I0320 23:44:27.010798  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.010864  106048 reflector.go:161] Listing and watching *core.Pod from storage/cacher.go:/pods
I0320 23:44:27.011254  106048 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.011507  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.012438  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.012596  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.012755  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.014407  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.014497  106048 store.go:1319] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0320 23:44:27.014624  106048 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.014681  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.014696  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.014736  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.014793  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.014819  106048 reflector.go:161] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0320 23:44:27.015009  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.016355  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.016477  106048 store.go:1319] Monitoring services count at <storage-prefix>//services
I0320 23:44:27.016502  106048 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.016590  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.016601  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.016630  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.016667  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.016695  106048 reflector.go:161] Listing and watching *core.Service from storage/cacher.go:/services
I0320 23:44:27.016880  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.017280  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.017364  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.017380  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.017403  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.017456  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.017515  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.017917  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.018100  106048 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.018179  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.018196  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.018254  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.018308  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.018354  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.018582  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.018612  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.018757  106048 store.go:1319] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0320 23:44:27.019137  106048 reflector.go:161] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0320 23:44:27.037250  106048 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0320 23:44:27.037340  106048 master.go:425] Enabling API group "authentication.k8s.io".
I0320 23:44:27.037371  106048 master.go:425] Enabling API group "authorization.k8s.io".
I0320 23:44:27.037559  106048 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.037722  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.037762  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.037815  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.037871  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.038227  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.038370  106048 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0320 23:44:27.038573  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.038637  106048 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0320 23:44:27.038825  106048 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.038944  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.038962  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.039020  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.039102  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.040025  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.040085  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.040156  106048 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0320 23:44:27.040200  106048 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0320 23:44:27.040285  106048 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.040352  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.040363  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.040393  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.040487  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.041127  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.041181  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.041225  106048 store.go:1319] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0320 23:44:27.041239  106048 master.go:425] Enabling API group "autoscaling".
I0320 23:44:27.041373  106048 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.041478  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.041489  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.041516  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.041542  106048 reflector.go:161] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0320 23:44:27.041684  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.041917  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.042010  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.042073  106048 store.go:1319] Monitoring jobs.batch count at <storage-prefix>//jobs
I0320 23:44:27.042209  106048 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.042230  106048 reflector.go:161] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0320 23:44:27.042273  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.042282  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.042318  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.042441  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.042666  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.042707  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.043367  106048 store.go:1319] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0320 23:44:27.043394  106048 master.go:425] Enabling API group "batch".
I0320 23:44:27.043442  106048 reflector.go:161] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0320 23:44:27.043535  106048 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.043594  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.043605  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.043663  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.043736  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.043987  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.044019  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.044130  106048 store.go:1319] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0320 23:44:27.044150  106048 master.go:425] Enabling API group "certificates.k8s.io".
I0320 23:44:27.044266  106048 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.044359  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.044370  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.044401  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.044451  106048 reflector.go:161] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0320 23:44:27.044594  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.046257  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.046346  106048 store.go:1319] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0320 23:44:27.046446  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.046490  106048 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.046522  106048 reflector.go:161] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0320 23:44:27.046543  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.046553  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.046600  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.046647  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.047100  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.047183  106048 store.go:1319] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0320 23:44:27.047198  106048 master.go:425] Enabling API group "coordination.k8s.io".
I0320 23:44:27.047315  106048 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.047372  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.047382  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.047412  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.047439  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.047464  106048 reflector.go:161] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0320 23:44:27.047576  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.047984  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.048080  106048 store.go:1319] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0320 23:44:27.048108  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.048147  106048 reflector.go:161] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0320 23:44:27.048207  106048 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.048272  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.048281  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.048314  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.048408  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.048703  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.048737  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.048867  106048 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0320 23:44:27.049008  106048 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.049078  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.049087  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.049120  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.049175  106048 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0320 23:44:27.049323  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.049668  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.049714  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.049786  106048 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0320 23:44:27.049931  106048 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.049988  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.049997  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.050032  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.050119  106048 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0320 23:44:27.050248  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.050543  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.050653  106048 store.go:1319] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0320 23:44:27.050857  106048 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.050919  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.050929  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.050955  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.050990  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.051028  106048 reflector.go:161] Listing and watching *networking.Ingress from storage/cacher.go:/ingresses
I0320 23:44:27.051075  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.051629  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.051713  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.051742  106048 store.go:1319] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0320 23:44:27.051830  106048 reflector.go:161] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0320 23:44:27.051895  106048 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.051959  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.051968  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.052026  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.052140  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.052371  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.052411  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.052514  106048 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0320 23:44:27.052675  106048 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.052740  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.052750  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.052962  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.053203  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.053325  106048 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0320 23:44:27.053568  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.053608  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.053763  106048 store.go:1319] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0320 23:44:27.053793  106048 reflector.go:161] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0320 23:44:27.053777  106048 master.go:425] Enabling API group "extensions".
I0320 23:44:27.054021  106048 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.054132  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.054143  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.054208  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.054264  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.054518  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.054599  106048 store.go:1319] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0320 23:44:27.054728  106048 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.054824  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.054835  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.054866  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.054914  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.055043  106048 reflector.go:161] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0320 23:44:27.055188  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.055461  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.055480  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.055561  106048 store.go:1319] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0320 23:44:27.055572  106048 master.go:425] Enabling API group "networking.k8s.io".
I0320 23:44:27.055598  106048 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.055604  106048 reflector.go:161] Listing and watching *networking.Ingress from storage/cacher.go:/ingresses
I0320 23:44:27.055671  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.055682  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.055714  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.055844  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.056120  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.056144  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.056196  106048 store.go:1319] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0320 23:44:27.056206  106048 master.go:425] Enabling API group "node.k8s.io".
I0320 23:44:27.056369  106048 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.056462  106048 reflector.go:161] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0320 23:44:27.056483  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.056492  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.056518  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.056642  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.062986  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.063076  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.063181  106048 store.go:1319] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0320 23:44:27.063245  106048 reflector.go:161] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0320 23:44:27.063367  106048 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.063451  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.063463  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.063492  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.063705  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.065003  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.065079  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.065168  106048 store.go:1319] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0320 23:44:27.065190  106048 master.go:425] Enabling API group "policy".
I0320 23:44:27.065224  106048 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.065293  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.065308  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.065238  106048 reflector.go:161] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0320 23:44:27.065336  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.065379  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.065647  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.065774  106048 store.go:1319] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0320 23:44:27.065888  106048 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.065953  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.065964  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.065990  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.066042  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.066105  106048 reflector.go:161] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0320 23:44:27.066213  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.066492  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.066752  106048 store.go:1319] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0320 23:44:27.066771  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.066791  106048 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.066830  106048 reflector.go:161] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0320 23:44:27.066857  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.066871  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.066902  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.067012  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.067275  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.067357  106048 store.go:1319] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0320 23:44:27.067559  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.067669  106048 reflector.go:161] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0320 23:44:27.067660  106048 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.067735  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.067744  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.067797  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.068355  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.069346  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.069444  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.069486  106048 store.go:1319] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0320 23:44:27.069517  106048 reflector.go:161] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0320 23:44:27.069520  106048 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.069575  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.069584  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.069609  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.069670  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.070007  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.070128  106048 store.go:1319] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0320 23:44:27.070286  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.070276  106048 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.070359  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.070372  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.070380  106048 reflector.go:161] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0320 23:44:27.070417  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.070494  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.070868  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.070898  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.070945  106048 store.go:1319] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0320 23:44:27.070981  106048 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.071042  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.071068  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.071117  106048 reflector.go:161] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0320 23:44:27.071123  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.071248  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.071638  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.071721  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.072528  106048 store.go:1319] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0320 23:44:27.072717  106048 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.072810  106048 reflector.go:161] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0320 23:44:27.072830  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.072843  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.072870  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.072940  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.073213  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.073273  106048 store.go:1319] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0320 23:44:27.073284  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.073291  106048 master.go:425] Enabling API group "rbac.authorization.k8s.io".
I0320 23:44:27.073841  106048 reflector.go:161] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
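(Editor's sketch.) Every `storing … / Monitoring … / Listing and watching …` triple above is one resource being wired to etcd: storage_factory builds a storagebackend.Config, an etcd3 gRPC client is dialed (the `parsed scheme` / `pin "127.0.0.1:2379"` lines), and the watch cache starts a reflector for that resource. A minimal sketch of the Config these lines print, against the k8s.io/apiserver version vendored at this commit; the prefix and server list are copied from the log, everything else is illustrative:

```go
package main

import (
	"fmt"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// Mirrors the Config literal dumped by storage_factory.go:285 above.
	cfg := storagebackend.Config{
		Prefix: "9f8678fb-66ef-48f4-9aa2-2686f067de20", // per-test etcd key prefix, from the log
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"}, // the local etcd the test dials
		},
		Paging: true, // field present in the release vendored here; dropped in later k8s versions
	}
	fmt.Printf("storing roles.rbac.authorization.k8s.io from %#v\n", cfg)
}
```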
I0320 23:44:27.075556  106048 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.075622  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.075631  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.075659  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.075732  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.076045  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.076162  106048 store.go:1319] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0320 23:44:27.076318  106048 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.076392  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.076412  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.076473  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.076514  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.076543  106048 reflector.go:161] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0320 23:44:27.076810  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.077144  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.077228  106048 store.go:1319] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0320 23:44:27.077244  106048 master.go:425] Enabling API group "scheduling.k8s.io".
I0320 23:44:27.077373  106048 master.go:417] Skipping disabled API group "settings.k8s.io".
I0320 23:44:27.077535  106048 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.077611  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.077625  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.077658  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.077726  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.077757  106048 reflector.go:161] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0320 23:44:27.077902  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.078185  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.078258  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.078261  106048 store.go:1319] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0320 23:44:27.078281  106048 reflector.go:161] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0320 23:44:27.078465  106048 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.078552  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.078562  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.078609  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.078734  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.078941  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.079001  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.079015  106048 store.go:1319] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0320 23:44:27.079043  106048 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.079141  106048 reflector.go:161] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0320 23:44:27.079145  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.079348  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.079374  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.079448  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.080114  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.080257  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.080767  106048 store.go:1319] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0320 23:44:27.080915  106048 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.081102  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.081123  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.081183  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.080844  106048 reflector.go:161] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0320 23:44:27.081310  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.081607  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.081755  106048 store.go:1319] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0320 23:44:27.082032  106048 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.082131  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.082174  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.082210  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.082293  106048 reflector.go:161] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0320 23:44:27.082647  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.084255  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.084351  106048 store.go:1319] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0320 23:44:27.084508  106048 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.084580  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.084592  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.084618  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.084653  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.084680  106048 reflector.go:161] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0320 23:44:27.084785  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.084981  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.085078  106048 store.go:1319] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0320 23:44:27.085092  106048 master.go:425] Enabling API group "storage.k8s.io".
I0320 23:44:27.085125  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.085214  106048 reflector.go:161] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
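(Editor's sketch.) Each `reflector.go:161] Listing and watching *T` line is a watch cache priming itself: a reflector performs one LIST to fill its store, then a WATCH from the returned resourceVersion. The in-process cacher above uses the same Reflector type internally; a standalone sketch of that mechanism via the public client-go API follows (the kubeconfig path is a placeholder):

```go
package main

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// LIST+WATCH source for pods in all namespaces.
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "pods", v1.NamespaceAll, fields.Everything())

	// The reflector keeps this store in sync: the same "Listing and watching"
	// step the cacher logs above, just pointed at a real cluster.
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &v1.Pod{}, store, 30*time.Second)

	stop := make(chan struct{})
	defer close(stop)
	r.Run(stop) // blocks: LIST once, then WATCH, relisting on errors
}
```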
I0320 23:44:27.085256  106048 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.086446  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.086482  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.086560  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.086600  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.087128  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.087322  106048 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0320 23:44:27.087888  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.087934  106048 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0320 23:44:27.089491  106048 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.089649  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.089681  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.089711  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.089790  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.090131  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.090208  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.090348  106048 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0320 23:44:27.090405  106048 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0320 23:44:27.090522  106048 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.090587  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.090596  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.090623  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.090697  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.090923  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.091105  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.091667  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.095813  106048 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0320 23:44:27.096093  106048 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0320 23:44:27.096274  106048 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.096437  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.096478  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.096541  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.096635  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.097221  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.097306  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.097394  106048 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0320 23:44:27.097464  106048 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0320 23:44:27.097916  106048 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.098100  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.098150  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.098196  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.098271  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.098749  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.098975  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.099230  106048 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0320 23:44:27.099266  106048 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0320 23:44:27.099518  106048 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.099785  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.099812  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.099872  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.099958  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.101117  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.101260  106048 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0320 23:44:27.101415  106048 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.101490  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.101503  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.101537  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.101621  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.101652  106048 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0320 23:44:27.101818  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.102224  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.102393  106048 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0320 23:44:27.102564  106048 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.102662  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.102688  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.102735  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.102789  106048 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0320 23:44:27.102836  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.103008  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.103458  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.103552  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.103676  106048 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0320 23:44:27.103799  106048 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0320 23:44:27.104089  106048 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.104163  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.104178  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.104232  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.104312  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.104624  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.104748  106048 store.go:1319] Monitoring deployments.apps count at <storage-prefix>//deployments
I0320 23:44:27.104911  106048 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.105001  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.105019  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.105079  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.105209  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.105228  106048 reflector.go:161] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0320 23:44:27.105355  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.105596  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.105677  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.105721  106048 store.go:1319] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0320 23:44:27.105827  106048 reflector.go:161] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0320 23:44:27.105856  106048 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.105919  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.105928  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.105981  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.106046  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.106337  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.106375  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.106475  106048 store.go:1319] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0320 23:44:27.106507  106048 reflector.go:161] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0320 23:44:27.106695  106048 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.106770  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.106782  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.106810  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.106888  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.107106  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.107188  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.107224  106048 store.go:1319] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0320 23:44:27.107257  106048 reflector.go:161] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0320 23:44:27.107398  106048 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.107510  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.107520  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.107546  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.107584  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.107927  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.108013  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.108061  106048 store.go:1319] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0320 23:44:27.108080  106048 master.go:425] Enabling API group "apps".
I0320 23:44:27.108132  106048 reflector.go:161] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0320 23:44:27.108107  106048 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.108446  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.108467  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.108492  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.108795  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.110998  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.112286  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.112377  106048 store.go:1319] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0320 23:44:27.112409  106048 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.112484  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.112501  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.112528  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.112584  106048 reflector.go:161] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0320 23:44:27.112775  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.113169  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.113272  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.113371  106048 store.go:1319] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0320 23:44:27.113392  106048 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0320 23:44:27.113417  106048 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9f8678fb-66ef-48f4-9aa2-2686f067de20", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0320 23:44:27.113445  106048 reflector.go:161] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0320 23:44:27.113647  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.113665  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.113693  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.113764  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.114145  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.114200  106048 store.go:1319] Monitoring events count at <storage-prefix>//events
I0320 23:44:27.114225  106048 master.go:425] Enabling API group "events.k8s.io".
I0320 23:44:27.114247  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0320 23:44:27.125554  106048 genericapiserver.go:344] Skipping API batch/v2alpha1 because it has no resources.
W0320 23:44:27.144729  106048 genericapiserver.go:344] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0320 23:44:27.151412  106048 genericapiserver.go:344] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0320 23:44:27.152629  106048 genericapiserver.go:344] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0320 23:44:27.156023  106048 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
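(Editor's sketch.) The `Skipping API … because it has no resources` warnings are expected in this test: alpha API versions are disabled by default, so no storage gets registered for them and genericapiserver declines to install those versions. Schematically, under that assumption (the names and counts below are illustrative, not the genericapiserver internals):

```go
package main

import "fmt"

func main() {
	// Illustrative counts only: a group/version with zero registered
	// resources is skipped, which is why the alpha versions above warn.
	resourcesByVersion := map[string]int{
		"batch/v2alpha1":             0, // alpha: disabled by default, nothing registered
		"scheduling.k8s.io/v1alpha1": 0,
		"scheduling.k8s.io/v1beta1":  1, // priorityclasses, enabled above
	}
	for gv, n := range resourcesByVersion {
		if n == 0 {
			fmt.Printf("W] Skipping API %s because it has no resources.\n", gv)
			continue
		}
		fmt.Printf("I] Installing API %s (%d resources).\n", gv, n)
	}
}
```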
I0320 23:44:27.174124  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.174153  106048 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0320 23:44:27.174163  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.174173  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.174182  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.174495  106048 wrap.go:47] GET /healthz: (508.328µs) 500
goroutine 29300 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0118c9500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0118c9500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f5880a0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0116edca8, 0xc00c0484e0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc600)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc600)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc600)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc600)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc500)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0116edca8, 0xc0118cc500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0118d05a0, 0xc00e689ac0, 0x75f60a0, 0xc0116edca8, 0xc0118cc500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42580]
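(Editor's sketch.) The 500 body above shows how /healthz aggregates named checks: each check reports `[+]` or `[-]`, failure reasons are withheld from the HTTP body (but logged server-side), and any single failure fails the whole endpoint. A self-contained sketch of that aggregation pattern, not the actual k8s.io/apiserver healthz package:

```go
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	run  func() error
}

func healthzHandler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.run(); err != nil {
				// Reason is withheld from the response but would be logged,
				// matching "[-]etcd failed: reason withheld" above.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			http.Error(w, body+"healthz check failed", http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, body+"ok")
	}
}

func main() {
	checks := []check{
		{"ping", func() error { return nil }},
		{"etcd", func() error { return fmt.Errorf("client connection not yet established") }},
	}
	http.Handle("/healthz", healthzHandler(checks))
	http.ListenAndServe("127.0.0.1:8080", nil) // sketch only; error ignored
}
```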
I0320 23:44:27.175518  106048 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.376006ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42582]
I0320 23:44:27.177985  106048 wrap.go:47] GET /api/v1/services: (1.155794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42582]
I0320 23:44:27.181594  106048 wrap.go:47] GET /api/v1/services: (913.823µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42582]
I0320 23:44:27.183915  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.183941  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.183952  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.183961  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.184130  106048 wrap.go:47] GET /healthz: (291.501µs) 500
goroutine 29294 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01095cd20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01095cd20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f5ed240, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc01197e0a8, 0xc00e312f00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc01197e0a8, 0xc011992000)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc01197e0a8, 0xc011992000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc01197e0a8, 0xc011992000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc01197e0a8, 0xc011992000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc01197e0a8, 0xc011992000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc01197e0a8, 0xc011992000)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc01197e0a8, 0xc011992000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc01197e0a8, 0xc011992000)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc01197e0a8, 0xc011992000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc01197e0a8, 0xc011992000)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc01197e0a8, 0xc011992000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc01197e0a8, 0xc010511f00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc01197e0a8, 0xc010511f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011942d80, 0xc00e689ac0, 0x75f60a0, 0xc01197e0a8, 0xc010511f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.185110  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (887.251µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42582]
I0320 23:44:27.187484  106048 wrap.go:47] GET /api/v1/services: (1.146451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:27.187621  106048 wrap.go:47] GET /api/v1/services: (2.01798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.188085  106048 wrap.go:47] POST /api/v1/namespaces: (2.470238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42582]
I0320 23:44:27.190499  106048 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.310948ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.192358  106048 wrap.go:47] POST /api/v1/namespaces: (1.425123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.193613  106048 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (924.554µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.195283  106048 wrap.go:47] POST /api/v1/namespaces: (1.314798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.275415  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.275460  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.275471  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.275557  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.275733  106048 wrap.go:47] GET /healthz: (534.282µs) 500
goroutine 29302 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0118c9ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0118c9ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f5890c0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0116edd78, 0xc01191c300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd100)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd100)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd100)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd100)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd000)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0116edd78, 0xc0118cd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0118d0720, 0xc00e689ac0, 0x75f60a0, 0xc0116edd78, 0xc0118cd000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42580]
I0320 23:44:27.285159  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.285197  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.285225  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.285235  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.285405  106048 wrap.go:47] GET /healthz: (388.766µs) 500
goroutine 29323 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010b46e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010b46e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f60e080, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc004173dd8, 0xc003c3aa80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0c00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0c00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0c00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0c00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0b00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc004173dd8, 0xc00eda0b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010cb3980, 0xc00e689ac0, 0x75f60a0, 0xc004173dd8, 0xc00eda0b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.375347  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.375386  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.375398  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.375407  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.375574  106048 wrap.go:47] GET /healthz: (362.789µs) 500
goroutine 29304 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0118c9f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0118c9f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f589180, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0116edd80, 0xc01191c780, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd500)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd500)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd400)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0116edd80, 0xc0118cd400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0118d07e0, 0xc00e689ac0, 0x75f60a0, 0xc0116edd80, 0xc0118cd400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42580]
I0320 23:44:27.385269  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.385311  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.385321  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.385329  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.385495  106048 wrap.go:47] GET /healthz: (353.133µs) 500
goroutine 29325 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010b46f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010b46f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f60e160, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc004173e00, 0xc003c3af00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc004173e00, 0xc00eda1200)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc004173e00, 0xc00eda1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc004173e00, 0xc00eda1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc004173e00, 0xc00eda1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc004173e00, 0xc00eda1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc004173e00, 0xc00eda1200)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc004173e00, 0xc00eda1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc004173e00, 0xc00eda1200)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc004173e00, 0xc00eda1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc004173e00, 0xc00eda1200)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc004173e00, 0xc00eda1200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc004173e00, 0xc00eda1100)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc004173e00, 0xc00eda1100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010cb3b00, 0xc00e689ac0, 0x75f60a0, 0xc004173e00, 0xc00eda1100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.475360  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.475396  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.475406  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.475437  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.475605  106048 wrap.go:47] GET /healthz: (382.088µs) 500
goroutine 29327 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010b47030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010b47030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f60e380, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc004173e08, 0xc003c3b800, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc004173e08, 0xc00eda1600)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc004173e08, 0xc00eda1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc004173e08, 0xc00eda1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc004173e08, 0xc00eda1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc004173e08, 0xc00eda1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc004173e08, 0xc00eda1600)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc004173e08, 0xc00eda1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc004173e08, 0xc00eda1600)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc004173e08, 0xc00eda1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc004173e08, 0xc00eda1600)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc004173e08, 0xc00eda1600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc004173e08, 0xc00eda1500)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc004173e08, 0xc00eda1500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010cb3c20, 0xc00e689ac0, 0x75f60a0, 0xc004173e08, 0xc00eda1500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42580]
I0320 23:44:27.490262  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.490300  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.490312  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.490321  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.490513  106048 wrap.go:47] GET /healthz: (380.724µs) 500
goroutine 29353 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01095d960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01095d960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f61c260, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc01197e2a0, 0xc011a5e000, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c500)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c500)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c400)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc01197e2a0, 0xc011a2c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011943f20, 0xc00e689ac0, 0x75f60a0, 0xc01197e2a0, 0xc011a2c400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.575390  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.575440  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.575453  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.575467  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.575633  106048 wrap.go:47] GET /healthz: (392.95µs) 500
goroutine 29355 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01095da40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01095da40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f61c360, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc01197e2c8, 0xc011a5e480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2cb00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2cb00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2cb00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2cb00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2ca00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc01197e2c8, 0xc011a2ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011a740c0, 0xc00e689ac0, 0x75f60a0, 0xc01197e2c8, 0xc011a2ca00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42580]
I0320 23:44:27.585169  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.585211  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.585223  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.585232  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.585384  106048 wrap.go:47] GET /healthz: (348.98µs) 500
goroutine 29329 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010b47110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010b47110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f60e780, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc004173e30, 0xc011a8c000, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc004173e30, 0xc00eda1d00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc004173e30, 0xc00eda1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc004173e30, 0xc00eda1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc004173e30, 0xc00eda1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc004173e30, 0xc00eda1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc004173e30, 0xc00eda1d00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc004173e30, 0xc00eda1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc004173e30, 0xc00eda1d00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc004173e30, 0xc00eda1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc004173e30, 0xc00eda1d00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc004173e30, 0xc00eda1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc004173e30, 0xc00eda1c00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc004173e30, 0xc00eda1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010cb3e60, 0xc00e689ac0, 0x75f60a0, 0xc004173e30, 0xc00eda1c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.675651  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.675687  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.675699  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.675707  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.675849  106048 wrap.go:47] GET /healthz: (339.803µs) 500
goroutine 29306 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011a9e000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011a9e000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f589280, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0116edda8, 0xc01191cc00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdc00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdc00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdc00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdc00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdb00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0116edda8, 0xc0118cdb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0118d0960, 0xc00e689ac0, 0x75f60a0, 0xc0116edda8, 0xc0118cdb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42580]
I0320 23:44:27.685286  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.685320  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.685331  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.685343  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.685497  106048 wrap.go:47] GET /healthz: (376.114µs) 500
goroutine 29382 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f8a3e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f8a3e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00edf5fe0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc00ee3d2b0, 0xc00e0aca80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969300)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969300)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969300)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969300)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969200)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc00ee3d2b0, 0xc011969200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0119d8180, 0xc00e689ac0, 0x75f60a0, 0xc00ee3d2b0, 0xc011969200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.775374  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.775418  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.775443  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.775451  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.775592  106048 wrap.go:47] GET /healthz: (359.049µs) 500
goroutine 29384 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f8a3f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f8a3f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f6900e0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc00ee3d2d8, 0xc00e0acf00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969900)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969900)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969900)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969900)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969800)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc00ee3d2d8, 0xc011969800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0119d8300, 0xc00e689ac0, 0x75f60a0, 0xc00ee3d2d8, 0xc011969800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42580]
I0320 23:44:27.785119  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.785154  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.785167  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.785176  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.785321  106048 wrap.go:47] GET /healthz: (355.069µs) 500
goroutine 29357 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01095db20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01095db20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f61c7e0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc01197e310, 0xc011a5ec00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc01197e310, 0xc011a2d400)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc01197e310, 0xc011a2d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc01197e310, 0xc011a2d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc01197e310, 0xc011a2d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc01197e310, 0xc011a2d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc01197e310, 0xc011a2d400)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc01197e310, 0xc011a2d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc01197e310, 0xc011a2d400)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc01197e310, 0xc011a2d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc01197e310, 0xc011a2d400)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc01197e310, 0xc011a2d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc01197e310, 0xc011a2d300)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc01197e310, 0xc011a2d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011a743c0, 0xc00e689ac0, 0x75f60a0, 0xc01197e310, 0xc011a2d300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.875387  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.875432  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.875443  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.875451  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.875607  106048 wrap.go:47] GET /healthz: (383.7µs) 500
goroutine 29308 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011a9e0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011a9e0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f589560, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0116eddf0, 0xc01191d200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2500)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2500)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2400)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0116eddf0, 0xc011ab2400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0118d0c00, 0xc00e689ac0, 0x75f60a0, 0xc0116eddf0, 0xc011ab2400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42580]
I0320 23:44:27.885258  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.885290  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.885302  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.885310  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.885635  106048 wrap.go:47] GET /healthz: (506.903µs) 500
goroutine 29310 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011a9e1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011a9e1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f589600, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0116eddf8, 0xc01191d680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2900)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2900)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2900)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2900)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2800)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0116eddf8, 0xc011ab2800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0118d0cc0, 0xc00e689ac0, 0x75f60a0, 0xc0116eddf8, 0xc011ab2800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.975356  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.975397  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.975409  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.975416  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.975590  106048 wrap.go:47] GET /healthz: (380.178µs) 500
goroutine 29312 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011a9e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011a9e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f5896a0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0116ede00, 0xc01191db00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2d00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2d00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2d00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2d00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2c00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0116ede00, 0xc011ab2c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0118d0d80, 0xc00e689ac0, 0x75f60a0, 0xc0116ede00, 0xc011ab2c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42580]
I0320 23:44:27.985188  106048 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0320 23:44:27.985222  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:27.985235  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:27.985243  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:27.985575  106048 wrap.go:47] GET /healthz: (533.109µs) 500
goroutine 29359 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01095dc00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01095dc00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f61cdc0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc01197e358, 0xc011a5f500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc01197e358, 0xc011a2dd00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc01197e358, 0xc011a2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc01197e358, 0xc011a2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc01197e358, 0xc011a2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc01197e358, 0xc011a2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc01197e358, 0xc011a2dd00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc01197e358, 0xc011a2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc01197e358, 0xc011a2dd00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc01197e358, 0xc011a2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc01197e358, 0xc011a2dd00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc01197e358, 0xc011a2dd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc01197e358, 0xc011a2dc00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc01197e358, 0xc011a2dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011a74720, 0xc00e689ac0, 0x75f60a0, 0xc01197e358, 0xc011a2dc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:27.998971  106048 client.go:352] parsed scheme: ""
I0320 23:44:27.999006  106048 client.go:352] scheme "" not registered, fallback to default scheme
I0320 23:44:27.999069  106048 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0320 23:44:27.999159  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:27.999608  106048 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0320 23:44:27.999672  106048 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0320 23:44:28.076845  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.076877  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:28.076887  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:28.077066  106048 wrap.go:47] GET /healthz: (1.51433ms) 500
goroutine 29426 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011a9e380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011a9e380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f589a20, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0116ede28, 0xc008c829a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3300)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3300)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3300)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3300)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3200)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0116ede28, 0xc011ab3200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0118d0fc0, 0xc00e689ac0, 0x75f60a0, 0xc0116ede28, 0xc011ab3200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42580]
I0320 23:44:28.086048  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.086112  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:28.086122  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:28.086285  106048 wrap.go:47] GET /healthz: (1.241015ms) 500
goroutine 29415 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010b471f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010b471f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f60edc0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc004173e80, 0xc0020734a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc004173e80, 0xc011b92200)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc004173e80, 0xc011b92200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc004173e80, 0xc011b92200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc004173e80, 0xc011b92200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc004173e80, 0xc011b92200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc004173e80, 0xc011b92200)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc004173e80, 0xc011b92200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc004173e80, 0xc011b92200)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc004173e80, 0xc011b92200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc004173e80, 0xc011b92200)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc004173e80, 0xc011b92200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc004173e80, 0xc011b92100)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc004173e80, 0xc011b92100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011b623c0, 0xc00e689ac0, 0x75f60a0, 0xc004173e80, 0xc011b92100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:28.176880  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.737997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:28.177561  106048 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (3.232414ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.177836  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.177857  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:28.177866  106048 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0320 23:44:28.178016  106048 wrap.go:47] GET /healthz: (1.516866ms) 500
goroutine 29422 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010b475e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010b475e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f60f8c0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc004173f00, 0xc002073760, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc004173f00, 0xc011b93100)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc004173f00, 0xc011b93100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc004173f00, 0xc011b93100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc004173f00, 0xc011b93100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc004173f00, 0xc011b93100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc004173f00, 0xc011b93100)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc004173f00, 0xc011b93100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc004173f00, 0xc011b93100)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc004173f00, 0xc011b93100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc004173f00, 0xc011b93100)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc004173f00, 0xc011b93100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc004173f00, 0xc011b93000)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc004173f00, 0xc011b93000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011b62d20, 0xc00e689ac0, 0x75f60a0, 0xc004173f00, 0xc011b93000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42590]
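
An editorial aside on the block above: these `GET /healthz: ... 500` entries are startup gating, not the `TestPreemptionRaces` failure itself. The test apiserver registers post-start hooks (`rbac/bootstrap-roles`, `scheduling/bootstrap-system-priority-classes`, `ca-registration`), `/healthz` reports `[-] ... failed: reason withheld` for each hook until it finishes, and the httplog wrapper dumps the stack of the goroutine that wrote the 500. A minimal sketch of that gating pattern follows — plain stdlib Go, not the `k8s.io/apiserver` implementation; every name in it is hypothetical:

```go
// healthgate.go — illustrative sketch of gating /healthz on named
// post-start hooks, in the spirit of the output above. Not the
// k8s.io/apiserver code; all names are hypothetical.
package main

import (
	"fmt"
	"net/http"
	"sort"
	"sync"
)

type hookGate struct {
	mu   sync.Mutex
	done map[string]bool // hook name -> finished?
}

func newHookGate(names ...string) *hookGate {
	g := &hookGate{done: make(map[string]bool, len(names))}
	for _, n := range names {
		g.done[n] = false
	}
	return g
}

// markDone is what a post-start hook calls when its work completes.
func (g *hookGate) markDone(name string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.done[name] = true
}

// healthz writes one [+]/[-] line per hook and answers 500 until every
// hook has finished, mirroring the "reason withheld" output above.
func (g *hookGate) healthz(w http.ResponseWriter, r *http.Request) {
	g.mu.Lock()
	defer g.mu.Unlock()
	names := make([]string, 0, len(g.done))
	for n := range g.done {
		names = append(names, n)
	}
	sort.Strings(names) // stable output; the real server reports in registration order
	healthy := true
	var body string
	for _, n := range names {
		if g.done[n] {
			body += fmt.Sprintf("[+]poststarthook/%s ok\n", n)
		} else {
			body += fmt.Sprintf("[-]poststarthook/%s failed: reason withheld\n", n)
			healthy = false
		}
	}
	if !healthy {
		http.Error(w, body+"healthz check failed", http.StatusInternalServerError)
		return
	}
	fmt.Fprint(w, body+"ok")
}

func main() {
	gate := newHookGate("rbac/bootstrap-roles", "ca-registration")
	http.HandleFunc("/healthz", gate.healthz)
	go func() {
		// A real hook would do its bootstrap work before marking done.
		gate.markDone("ca-registration")
		gate.markDone("rbac/bootstrap-roles")
	}()
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```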
I0320 23:44:28.178336  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.727346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42588]
I0320 23:44:28.178476  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.05465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42580]
I0320 23:44:28.180398  106048 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.652823ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42590]
I0320 23:44:28.180604  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.340103ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42588]
I0320 23:44:28.181738  106048 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.749886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.182814  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.589831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42590]
I0320 23:44:28.182999  106048 storage_scheduling.go:113] created PriorityClass system-node-critical with value 2000001000
I0320 23:44:28.183111  106048 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.964224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42588]
I0320 23:44:28.184903  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.397007ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42590]
I0320 23:44:28.185041  106048 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.797058ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.186093  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.186117  106048 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0320 23:44:28.186148  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (903.384µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42590]
I0320 23:44:28.186269  106048 wrap.go:47] GET /healthz: (1.131404ms) 500
goroutine 29390 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011b96540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011b96540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f691500, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc00ee3d3a0, 0xc00d76a2c0, 0x14b, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011becf00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011becf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011becf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011becf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011becf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011becf00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011becf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011becf00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011becf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011becf00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011becf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011bece00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc00ee3d3a0, 0xc011bece00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0119d8cc0, 0xc00e689ac0, 0x75f60a0, 0xc00ee3d3a0, 0xc011bece00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42588]
I0320 23:44:28.186671  106048 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.128211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.186881  106048 storage_scheduling.go:113] created PriorityClass system-cluster-critical with value 2000000000
I0320 23:44:28.186901  106048 storage_scheduling.go:122] all system priority classes are created successfully or already exist.
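
At this point the scheduling post-start hook is done: the log shows both built-in PriorityClasses created via the GET → 404 → POST → 201 pattern, `system-node-critical` with value 2000001000 and `system-cluster-critical` with value 2000000000. A sketch of that idempotent bootstrap using 2019-era client-go signatures (no context argument; later releases differ — treat the wiring as assumed, not authoritative):

```go
// prioritybootstrap.go — sketch of idempotently creating the two
// built-in PriorityClasses shown being bootstrapped above.
package bootstrap

import (
	schedulingv1beta1 "k8s.io/api/scheduling/v1beta1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureSystemPriorityClasses(cs kubernetes.Interface) error {
	for _, pc := range []schedulingv1beta1.PriorityClass{
		{ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"}, Value: 2000001000},
		{ObjectMeta: metav1.ObjectMeta{Name: "system-cluster-critical"}, Value: 2000000000},
	} {
		pc := pc // local copy so taking &pc below is safe
		// GET first: a 404, as in the log, means it must be created.
		_, err := cs.SchedulingV1beta1().PriorityClasses().Get(pc.Name, metav1.GetOptions{})
		if err == nil {
			continue // already exists
		}
		if !apierrors.IsNotFound(err) {
			return err
		}
		// The POST that returns 201 in the log.
		if _, err := cs.SchedulingV1beta1().PriorityClasses().Create(&pc); err != nil && !apierrors.IsAlreadyExists(err) {
			return err
		}
	}
	return nil
}
```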
I0320 23:44:28.187347  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (805.871µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42590]
I0320 23:44:28.188467  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (784.746µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.189670  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (865.964µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.190897  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (932.651µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.193109  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.824272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.193481  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0320 23:44:28.194442  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (776.386µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.196226  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.466244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.196488  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0320 23:44:28.197481  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (798.61µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.199161  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.272329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.199389  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
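
The long run of GET → 404 → POST → 201 → `created clusterrole` triples that starts here is the `rbac/bootstrap-roles` hook reconciling every default ClusterRole (`cluster-admin`, `system:discovery`, `system:basic-user`, the controller roles, and so on); `/healthz` keeps answering 500 until the whole set exists. A hedged get-or-create sketch for one role, again with the same era's client-go (the rule set shown is illustrative, not the real definition of `system:discovery`):

```go
// rbacbootstrap.go — sketch of the GET-404 / POST-201 reconcile
// pattern visible for each default ClusterRole in the log.
package bootstrap

import (
	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureClusterRole(cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := cs.RbacV1().ClusterRoles().Get(role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present: nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	// The GET 404'd, so create it (the POST that returns 201 above).
	_, err = cs.RbacV1().ClusterRoles().Create(role)
	if apierrors.IsAlreadyExists(err) {
		return nil // lost a race with another bootstrapper; fine
	}
	return err
}

// Example input with an illustrative (not authoritative) rule set:
var discovery = &rbacv1.ClusterRole{
	ObjectMeta: metav1.ObjectMeta{Name: "system:discovery"},
	Rules: []rbacv1.PolicyRule{{
		Verbs:           []string{"get"},
		NonResourceURLs: []string{"/healthz", "/version", "/apis"},
	}},
}
```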
I0320 23:44:28.207740  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (8.153258ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.210391  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.602129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.210615  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0320 23:44:28.211767  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (956.341µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.214336  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.674894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.214631  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0320 23:44:28.216831  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.919313ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.218714  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.496108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.218931  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0320 23:44:28.219899  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (772.703µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.222209  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.967483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.222382  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0320 23:44:28.223369  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (790.032µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.225390  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.625948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.225588  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0320 23:44:28.226652  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (866.968µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.229208  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.193324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.229589  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0320 23:44:28.231157  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (746.62µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.233679  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.098595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.234106  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0320 23:44:28.235627  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (997.562µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.238898  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.946193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.239113  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0320 23:44:28.243249  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.349023ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.245604  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.921267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.246506  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0320 23:44:28.247457  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (764.557µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.250841  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.994457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.251085  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0320 23:44:28.252430  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.022603ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.254772  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.952511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.254934  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0320 23:44:28.256706  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.496534ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.258588  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.500689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.258855  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0320 23:44:28.259857  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (756.013µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.261943  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.654562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.262221  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0320 23:44:28.263188  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (715.773µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.264906  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.410526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.265100  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0320 23:44:28.266320  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.085071ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.268130  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.465107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.268307  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0320 23:44:28.269286  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (805.296µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.271661  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.025341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.271876  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0320 23:44:28.272944  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (860.549µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.275011  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.697141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.275227  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0320 23:44:28.275844  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.275985  106048 wrap.go:47] GET /healthz: (996.804µs) 500
goroutine 29523 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011c47b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011c47b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011f5c5c0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc011c383c8, 0xc000079040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc011c383c8, 0xc011f33000)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc011c383c8, 0xc011f33000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc011c383c8, 0xc011f33000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc011c383c8, 0xc011f33000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc011c383c8, 0xc011f33000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc011c383c8, 0xc011f33000)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc011c383c8, 0xc011f33000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc011c383c8, 0xc011f33000)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc011c383c8, 0xc011f33000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc011c383c8, 0xc011f33000)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc011c383c8, 0xc011f33000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc011c383c8, 0xc011f32f00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc011c383c8, 0xc011f32f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011d54ba0, 0xc00e689ac0, 0x75f60a0, 0xc011c383c8, 0xc011f32f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42588]
I0320 23:44:28.277671  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (2.18311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.279621  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.605582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.279790  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0320 23:44:28.281032  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.04673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.283369  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.00004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.283572  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0320 23:44:28.284489  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (776.887µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.287079  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.259218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.287208  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.287351  106048 wrap.go:47] GET /healthz: (1.811083ms) 500
goroutine 29560 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011f530a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011f530a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011fcbaa0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc011fc2170, 0xc00d75d540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc011fc2170, 0xc011fde300)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc011fc2170, 0xc011fde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc011fc2170, 0xc011fde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc011fc2170, 0xc011fde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc011fc2170, 0xc011fde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc011fc2170, 0xc011fde300)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc011fc2170, 0xc011fde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc011fc2170, 0xc011fde300)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc011fc2170, 0xc011fde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc011fc2170, 0xc011fde300)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc011fc2170, 0xc011fde300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc011fc2170, 0xc011fde200)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc011fc2170, 0xc011fde200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011fd2900, 0xc00e689ac0, 0x75f60a0, 0xc011fc2170, 0xc011fde200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42588]
I0320 23:44:28.287651  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0320 23:44:28.288738  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (954.54µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.291193  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.167748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.291513  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0320 23:44:28.293501  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.634183ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.295452  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.51096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.295605  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0320 23:44:28.296816  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.077772ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.298538  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.41903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.298729  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0320 23:44:28.299959  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.023138ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.302115  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.778147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.302289  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0320 23:44:28.303655  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.232933ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.305590  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.650327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.305768  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0320 23:44:28.306725  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (722.141µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.308528  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.498775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.308776  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0320 23:44:28.309962  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (963.238µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.311880  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.461488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.312074  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0320 23:44:28.313142  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (843.707µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.315497  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.888116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.315912  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0320 23:44:28.317412  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.238137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.319838  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.910301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.320093  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0320 23:44:28.321293  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (993.231µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.323472  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.722649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.323816  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0320 23:44:28.324981  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (838.76µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.327037  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.580423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.327380  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0320 23:44:28.328558  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (880.394µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.330578  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.663768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.330835  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0320 23:44:28.331895  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (847.808µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.334204  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.840949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.334536  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0320 23:44:28.335791  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (984.074µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.338297  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.007759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.338483  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0320 23:44:28.339584  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (981.886µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.346187  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.715078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.350114  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0320 23:44:28.351229  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (872.91µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.354026  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.118183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.358531  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0320 23:44:28.364999  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (4.867445ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.374647  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (7.507211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.374928  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0320 23:44:28.375942  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.376022  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (874.93µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42588]
I0320 23:44:28.376154  106048 wrap.go:47] GET /healthz: (1.141966ms) 500
goroutine 29625 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0121c49a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0121c49a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012234720, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc011cde938, 0xc00bffeb40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc011cde938, 0xc01226a000)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc011cde938, 0xc01226a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc011cde938, 0xc01226a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc011cde938, 0xc01226a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc011cde938, 0xc01226a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc011cde938, 0xc01226a000)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc011cde938, 0xc01226a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc011cde938, 0xc01226a000)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc011cde938, 0xc01226a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc011cde938, 0xc01226a000)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc011cde938, 0xc01226a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc011cde938, 0xc01204bf00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc011cde938, 0xc01204bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012029e60, 0xc00e689ac0, 0x75f60a0, 0xc011cde938, 0xc01204bf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42584]
I0320 23:44:28.381873  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.493489ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42588]
I0320 23:44:28.382127  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0320 23:44:28.383228  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (931.568µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42588]
I0320 23:44:28.393962  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (10.359554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42588]
I0320 23:44:28.394267  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0320 23:44:28.395793  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.396008  106048 wrap.go:47] GET /healthz: (6.597505ms) 500
goroutine 29654 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0121c5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0121c5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0122aa240, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc011cdeaa0, 0xc000079900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b700)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b700)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b700)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b700)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b600)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc011cdeaa0, 0xc01226b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01226f2c0, 0xc00e689ac0, 0x75f60a0, 0xc011cdeaa0, 0xc01226b600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.397167  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (2.71045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42588]
I0320 23:44:28.400459  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.881118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42588]
I0320 23:44:28.400706  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0320 23:44:28.402177  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.152794ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.404797  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.308969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.405261  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0320 23:44:28.406656  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.038877ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.408918  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.892597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.409244  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0320 23:44:28.410355  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (947.367µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.412454  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.741094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.412647  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0320 23:44:28.413569  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (779.316µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.415418  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.565914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.415616  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0320 23:44:28.416818  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.04892ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.419245  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.071668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.419450  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0320 23:44:28.420503  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (929.696µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.422174  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.380798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.422337  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0320 23:44:28.423581  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.099379ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.425438  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.312318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.425628  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0320 23:44:28.426453  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (655.93µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.427907  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.152861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.428086  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0320 23:44:28.428834  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (624.523µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.435749  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.643636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.436750  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0320 23:44:28.458280  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.595572ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.477793  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.413653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.478212  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0320 23:44:28.479687  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.479853  106048 wrap.go:47] GET /healthz: (3.290854ms) 500
goroutine 29733 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01214f1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01214f1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0122452c0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc01197ef98, 0xc00bffef00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc01197ef98, 0xc01242c300)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc01197ef98, 0xc01242c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc01197ef98, 0xc01242c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc01197ef98, 0xc01242c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc01197ef98, 0xc01242c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc01197ef98, 0xc01242c300)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc01197ef98, 0xc01242c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc01197ef98, 0xc01242c300)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc01197ef98, 0xc01242c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc01197ef98, 0xc01242c300)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc01197ef98, 0xc01242c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc01197ef98, 0xc01242c200)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc01197ef98, 0xc01242c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01217f740, 0xc00e689ac0, 0x75f60a0, 0xc01197ef98, 0xc01242c200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42626]
I0320 23:44:28.485740  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.485900  106048 wrap.go:47] GET /healthz: (921.029µs) 500
goroutine 29662 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0121c5880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0121c5880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0122abe60, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc011cdebb0, 0xc00ac6ca00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc011cdebb0, 0xc012430700)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc011cdebb0, 0xc012430700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc011cdebb0, 0xc012430700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc011cdebb0, 0xc012430700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc011cdebb0, 0xc012430700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc011cdebb0, 0xc012430700)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc011cdebb0, 0xc012430700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc011cdebb0, 0xc012430700)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc011cdebb0, 0xc012430700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc011cdebb0, 0xc012430700)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc011cdebb0, 0xc012430700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc011cdebb0, 0xc012430600)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc011cdebb0, 0xc012430600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01242e540, 0xc00e689ac0, 0x75f60a0, 0xc011cdebb0, 0xc012430600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.495154  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.059152ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.516718  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.549856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.516968  106048 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0320 23:44:28.535390  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.130271ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.556554  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.328158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.556781  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
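
With the roles in place, the hook moves on to the default ClusterRoleBindings (`cluster-admin` here, then `system:discovery`, `system:basic-user`, and `system:public-info-viewer` below). Constructing the first of these with the `rbac/v1` types might look like the following sketch; the `system:masters` subject is the conventional default for `cluster-admin`, but treat the exact spec as illustrative:

```go
// bindingbootstrap.go — sketch of the ClusterRoleBinding created at
// 23:44:28.556 above. Subject and roleRef are the conventional
// defaults; the precise spec here is illustrative.
package bootstrap

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var clusterAdminBinding = &rbacv1.ClusterRoleBinding{
	ObjectMeta: metav1.ObjectMeta{Name: "cluster-admin"},
	Subjects: []rbacv1.Subject{{
		Kind:     rbacv1.GroupKind, // "Group"
		APIGroup: rbacv1.GroupName, // "rbac.authorization.k8s.io"
		Name:     "system:masters",
	}},
	RoleRef: rbacv1.RoleRef{
		APIGroup: rbacv1.GroupName,
		Kind:     "ClusterRole",
		Name:     "cluster-admin",
	},
}
```

Creation then follows the same get-or-create shape as `ensureClusterRole`, just against `cs.RbacV1().ClusterRoleBindings()`.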
I0320 23:44:28.577415  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.577608  106048 wrap.go:47] GET /healthz: (2.506287ms) 500
goroutine 29612 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01205be30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01205be30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01223d700, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc011c38968, 0xc011972280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc011c38968, 0xc00fd38500)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc011c38968, 0xc00fd38500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc011c38968, 0xc00fd38500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc011c38968, 0xc00fd38500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc011c38968, 0xc00fd38500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc011c38968, 0xc00fd38500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc011c38968, 0xc00fd38500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc011c38968, 0xc00fd38500)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc011c38968, 0xc00fd38500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc011c38968, 0xc00fd38500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc011c38968, 0xc00fd38500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc011c38968, 0xc00fd38400)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc011c38968, 0xc00fd38400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0121ef620, 0xc00e689ac0, 0x75f60a0, 0xc011c38968, 0xc00fd38400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42584]
I0320 23:44:28.577620  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (3.022983ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.585974  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.586171  106048 wrap.go:47] GET /healthz: (1.22515ms) 500
goroutine 29723 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0123d5500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0123d5500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0123f5f40, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc011fc2750, 0xc00d75da40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc011fc2750, 0xc012406d00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc011fc2750, 0xc012406d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc011fc2750, 0xc012406d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc011fc2750, 0xc012406d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc011fc2750, 0xc012406d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc011fc2750, 0xc012406d00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc011fc2750, 0xc012406d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc011fc2750, 0xc012406d00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc011fc2750, 0xc012406d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc011fc2750, 0xc012406d00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc011fc2750, 0xc012406d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc011fc2750, 0xc012406c00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc011fc2750, 0xc012406c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0123e1c20, 0xc00e689ac0, 0x75f60a0, 0xc011fc2750, 0xc012406c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.601785  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.377994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.603287  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0320 23:44:28.615801  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.543219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.636596  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.300247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.636834  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
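The system:discovery and system:basic-user entries above show the bootstrap pattern the rbac poststarthook repeats for every default binding: GET the clusterrolebinding, receive a 404, POST it, then log "created clusterrolebinding...". A minimal get-or-create sketch of that step, using current client-go signatures (the vendored code in this log predates the context parameter, and this is not storage_rbac.go itself):

    package example

    import (
        "context"
        "fmt"

        rbacv1 "k8s.io/api/rbac/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureClusterRoleBinding creates crb only if it is absent,
    // mirroring the GET 404 -> POST 201 pairs in the log.
    func ensureClusterRoleBinding(ctx context.Context, cs kubernetes.Interface, crb *rbacv1.ClusterRoleBinding) error {
        _, err := cs.RbacV1().ClusterRoleBindings().Get(ctx, crb.Name, metav1.GetOptions{})
        if err == nil {
            return nil // already present (200): nothing to do
        }
        if !apierrors.IsNotFound(err) {
            return err // anything but the expected 404 is a real failure
        }
        if _, err := cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{}); err != nil {
            return fmt.Errorf("creating clusterrolebinding %s: %w", crb.Name, err)
        }
        return nil
    }

Because the step is idempotent, a re-run of the hook against an already-bootstrapped store would produce only the GETs (200s), with no further POSTs.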
I0320 23:44:28.655735  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.393583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.680150  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.066479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.680288  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.680411  106048 wrap.go:47] GET /healthz: (3.602391ms) 500
goroutine 29746 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01250c770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01250c770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01253c000, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc012276818, 0xc012550000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc012276818, 0xc01250b500)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc012276818, 0xc01250b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc012276818, 0xc01250b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc012276818, 0xc01250b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc012276818, 0xc01250b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc012276818, 0xc01250b500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc012276818, 0xc01250b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc012276818, 0xc01250b500)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc012276818, 0xc01250b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc012276818, 0xc01250b500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc012276818, 0xc01250b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc012276818, 0xc01250b400)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc012276818, 0xc01250b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0124a5020, 0xc00e689ac0, 0x75f60a0, 0xc012276818, 0xc01250b400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42584]
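Every one of these dumps ends with "created by ...(*timeoutHandler).ServeHTTP": the timeout filter runs the rest of the chain in a fresh goroutine per request so it can abandon handlers that overrun their deadline, which is why each /healthz hit reports a different goroutine number (29723, 29746, ...). The standard library ships the same shape; a sketch of using it (the deadline and address here are illustrative):

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    func main() {
        healthz := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        })

        // http.TimeoutHandler runs the wrapped handler in its own
        // goroutine and replies 503 with the given body if the deadline
        // passes first -- the same spawn-per-request shape as the
        // apiserver's timeout filter in the traces above.
        h := http.TimeoutHandler(healthz, 60*time.Second, "request timed out")

        log.Fatal(http.ListenAndServe("127.0.0.1:8080", h))
    }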
I0320 23:44:28.680714  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0320 23:44:28.686838  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.686975  106048 wrap.go:47] GET /healthz: (1.318837ms) 500
goroutine 29614 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012502230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012502230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0125262a0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc011c38a08, 0xc0122b23c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39300)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39300)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39300)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39300)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39200)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc011c38a08, 0xc00fd39200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0121efda0, 0xc00e689ac0, 0x75f60a0, 0xc011c38a08, 0xc00fd39200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
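Meanwhile two clients (127.0.0.1:42584 and :42626 above) keep re-polling GET /healthz and keep receiving 500 until the bootstrap-roles hook reports done. A sketch of such a readiness poll, assuming only that the harness waits for a 200 (the base URL, interval, and function name are illustrative):

    package example

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthy polls /healthz until it returns 200 or the timeout
    // expires, printing the verbose check report on each failure much as
    // the log above echoes it.
    func waitForHealthy(base string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(base + "/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(100 * time.Millisecond)
        }
        return fmt.Errorf("healthz did not report ok within %v", timeout)
    }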
I0320 23:44:28.710556  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (14.323807ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.737967  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (23.203266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.739615  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0320 23:44:28.741373  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.386094ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.772741  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.581368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.776738  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.776949  106048 wrap.go:47] GET /healthz: (1.78089ms) 500
goroutine 29782 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0125485b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0125485b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012531440, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc011fc2aa0, 0xc007fb1180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc011fc2aa0, 0xc012537000)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc011fc2aa0, 0xc012537000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc011fc2aa0, 0xc012537000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc011fc2aa0, 0xc012537000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc011fc2aa0, 0xc012537000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc011fc2aa0, 0xc012537000)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc011fc2aa0, 0xc012537000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc011fc2aa0, 0xc012537000)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc011fc2aa0, 0xc012537000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc011fc2aa0, 0xc012537000)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc011fc2aa0, 0xc012537000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc011fc2aa0, 0xc012536f00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc011fc2aa0, 0xc012536f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0124f9140, 0xc00e689ac0, 0x75f60a0, 0xc011fc2aa0, 0xc012536f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42626]
I0320 23:44:28.777186  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0320 23:44:28.778596  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.141037ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.797252  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.797439  106048 wrap.go:47] GET /healthz: (2.449486ms) 500
goroutine 29616 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0027bc000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0027bc000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc006078500, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0022d2028, 0xc002c98780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0022d2028, 0xc005578d00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0022d2028, 0xc005578d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0022d2028, 0xc005578d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d2028, 0xc005578d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d2028, 0xc005578d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0022d2028, 0xc005578d00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0022d2028, 0xc005578d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0022d2028, 0xc005578d00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0022d2028, 0xc005578d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0022d2028, 0xc005578d00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0022d2028, 0xc005578d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0022d2028, 0xc005578c00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0022d2028, 0xc005578c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00804c300, 0xc00e689ac0, 0x75f60a0, 0xc0022d2028, 0xc005578c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:28.797923  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.725548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.798178  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0320 23:44:28.815730  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.450681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.837575  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.256235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.837822  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0320 23:44:28.856377  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.98307ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.880850  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.56388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.881019  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.881263  106048 wrap.go:47] GET /healthz: (6.157382ms) 500
goroutine 29813 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0027bcaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0027bcaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f8be5e0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0022d22a0, 0xc003b90a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694700)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694700)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694700)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694700)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694600)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0022d22a0, 0xc00d694600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00804d020, 0xc00e689ac0, 0x75f60a0, 0xc0022d22a0, 0xc00d694600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42584]
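The quoted "logging error output" bodies are the verbose healthz report: one [+] or [-] line per registered check, with the failing poststarthook shown only as "failed: reason withheld" (the concrete reason is logged server-side, as in the healthz.go:170 lines, rather than returned to the caller). A minimal sketch of that aggregation, with illustrative check names and not the vendored healthz package:

    package example

    import (
        "fmt"
        "net/http"
        "strings"
    )

    // namedCheck pairs a health check with the name printed in the report.
    type namedCheck struct {
        name string
        run  func() error
    }

    // healthzHandler runs every check in order, writes one [+]/[-] line
    // per check, and answers 500 when any check fails -- the body format
    // quoted in the "logging error output" entries above.
    func healthzHandler(checks []namedCheck) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            var b strings.Builder
            failed := false
            for _, c := range checks {
                if err := c.run(); err != nil {
                    failed = true
                    // Log err server-side; callers see only a withheld reason.
                    fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
                    continue
                }
                fmt.Fprintf(&b, "[+]%s ok\n", c.name)
            }
            if failed {
                http.Error(w, b.String()+"healthz check failed", http.StatusInternalServerError)
                return
            }
            fmt.Fprint(w, b.String()+"healthz check passed")
        }
    }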
I0320 23:44:28.881596  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0320 23:44:28.885898  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.886101  106048 wrap.go:47] GET /healthz: (1.115385ms) 500
goroutine 29737 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f8a27e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f8a27e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002f778c0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc000b92168, 0xc0003b2dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc000b92168, 0xc00e24ed00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc000b92168, 0xc00e24ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc000b92168, 0xc00e24ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc000b92168, 0xc00e24ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc000b92168, 0xc00e24ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc000b92168, 0xc00e24ed00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc000b92168, 0xc00e24ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc000b92168, 0xc00e24ed00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc000b92168, 0xc00e24ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc000b92168, 0xc00e24ed00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc000b92168, 0xc00e24ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc000b92168, 0xc00e24ec00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc000b92168, 0xc00e24ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00f7604e0, 0xc00e689ac0, 0x75f60a0, 0xc000b92168, 0xc00e24ec00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.895286  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.122352ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.916742  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.471698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.917007  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0320 23:44:28.935662  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.380554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.966187  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.54859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.966462  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0320 23:44:28.976440  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.163274ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:28.977328  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.977561  106048 wrap.go:47] GET /healthz: (1.578694ms) 500
goroutine 29692 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0007a7180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0007a7180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f98a760, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0028ae658, 0xc002687cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0028ae658, 0xc00b38eb00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0028ae658, 0xc00b38eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0028ae658, 0xc00b38eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0028ae658, 0xc00b38eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0028ae658, 0xc00b38eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0028ae658, 0xc00b38eb00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0028ae658, 0xc00b38eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0028ae658, 0xc00b38eb00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0028ae658, 0xc00b38eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0028ae658, 0xc00b38eb00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0028ae658, 0xc00b38eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0028ae658, 0xc00b38e800)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0028ae658, 0xc00b38e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e4fa480, 0xc00e689ac0, 0x75f60a0, 0xc0028ae658, 0xc00b38e800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42584]
I0320 23:44:28.985869  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:28.986045  106048 wrap.go:47] GET /healthz: (1.062101ms) 500
goroutine 29819 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0027bd650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0027bd650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f8bf9e0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0022d2548, 0xc00110cf00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8c00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8c00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8c00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8c00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8a00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0022d2548, 0xc00a6e8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00f55c960, 0xc00e689ac0, 0x75f60a0, 0xc0022d2548, 0xc00a6e8a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
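The wrap.go:47 entries themselves ("GET /healthz: (1.578694ms) 500 [user-agent addr]") come from a logging wrapper around the ResponseWriter; the recordStatus/WriteHeader frames at the top of each dump are that wrapper capturing the status code. A rough sketch of the same idea (not the vendored httplog code; the format string is approximate):

    package example

    import (
        "log"
        "net/http"
        "time"
    )

    // respLogger remembers the status the inner handler wrote so the
    // wrapper can log it after the request completes.
    type respLogger struct {
        http.ResponseWriter
        status int
    }

    func (l *respLogger) WriteHeader(code int) {
        l.status = code // recordStatus, as in the traces above
        l.ResponseWriter.WriteHeader(code)
    }

    // withLogging emits one line per request: method, path, latency,
    // status, user agent, and remote address.
    func withLogging(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            start := time.Now()
            lw := &respLogger{ResponseWriter: w, status: http.StatusOK}
            next.ServeHTTP(lw, r)
            log.Printf("%s %s: (%v) %d [%s %s]",
                r.Method, r.URL.Path, time.Since(start), lw.status,
                r.UserAgent(), r.RemoteAddr)
        })
    }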
I0320 23:44:29.001449  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.373851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.001731  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0320 23:44:29.016633  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.35344ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.036441  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.145874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.036670  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0320 23:44:29.055664  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.358724ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.078011  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.078295  106048 wrap.go:47] GET /healthz: (3.27492ms) 500
goroutine 29751 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f892460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f892460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f754d20, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc00eb24200, 0xc004dbe3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc00eb24200, 0xc00acfcb00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc00eb24200, 0xc00acfcb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc00eb24200, 0xc00acfcb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc00eb24200, 0xc00acfcb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc00eb24200, 0xc00acfcb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc00eb24200, 0xc00acfcb00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc00eb24200, 0xc00acfcb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc00eb24200, 0xc00acfcb00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc00eb24200, 0xc00acfcb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc00eb24200, 0xc00acfcb00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc00eb24200, 0xc00acfcb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc00eb24200, 0xc00acfca00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc00eb24200, 0xc00acfca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e26a6c0, 0xc00e689ac0, 0x75f60a0, 0xc00eb24200, 0xc00acfca00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42626]
I0320 23:44:29.078455  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.006888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.078763  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0320 23:44:29.085770  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.085963  106048 wrap.go:47] GET /healthz: (1.052579ms) 500
goroutine 29772 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0027456c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0027456c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fbb2800, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc005e72418, 0xc00110d400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc005e72418, 0xc009665400)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc005e72418, 0xc009665400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc005e72418, 0xc009665400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc005e72418, 0xc009665400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc005e72418, 0xc009665400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc005e72418, 0xc009665400)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc005e72418, 0xc009665400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc005e72418, 0xc009665400)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc005e72418, 0xc009665400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc005e72418, 0xc009665400)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc005e72418, 0xc009665400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc005e72418, 0xc009665300)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc005e72418, 0xc009665300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00f5f2f00, 0xc00e689ac0, 0x75f60a0, 0xc005e72418, 0xc009665300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.095361  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.153001ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.116454  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.163047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.116769  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0320 23:44:29.135685  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.440181ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.156816  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.524061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.157091  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0320 23:44:29.175722  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.49425ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.176285  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.176463  106048 wrap.go:47] GET /healthz: (1.041725ms) 500
goroutine 29755 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f893180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f893180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f755ca0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc00eb24420, 0xc00110d7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc00eb24420, 0xc00acfdd00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc00eb24420, 0xc00acfdd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc00eb24420, 0xc00acfdd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc00eb24420, 0xc00acfdd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc00eb24420, 0xc00acfdd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc00eb24420, 0xc00acfdd00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc00eb24420, 0xc00acfdd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc00eb24420, 0xc00acfdd00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc00eb24420, 0xc00acfdd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc00eb24420, 0xc00acfdd00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc00eb24420, 0xc00acfdd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc00eb24420, 0xc00acfd900)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc00eb24420, 0xc00acfd900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e26b2c0, 0xc00e689ac0, 0x75f60a0, 0xc00eb24420, 0xc00acfd900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42626]
I0320 23:44:29.190353  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.190560  106048 wrap.go:47] GET /healthz: (5.558018ms) 500
goroutine 29844 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f718000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f718000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f9477e0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc000b92a88, 0xc00110db80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc000b92a88, 0xc007e9fb00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc000b92a88, 0xc007e9fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc000b92a88, 0xc007e9fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc000b92a88, 0xc007e9fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc000b92a88, 0xc007e9fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc000b92a88, 0xc007e9fb00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc000b92a88, 0xc007e9fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc000b92a88, 0xc007e9fb00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc000b92a88, 0xc007e9fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc000b92a88, 0xc007e9fb00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc000b92a88, 0xc007e9fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc000b92a88, 0xc007e9f700)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc000b92a88, 0xc007e9f700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00f761da0, 0xc00e689ac0, 0x75f60a0, 0xc000b92a88, 0xc007e9f700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.196241  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.065493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.196671  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0320 23:44:29.215541  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.30992ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.236599  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.400201ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.236842  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0320 23:44:29.255621  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.377568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.288345  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.103995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.288614  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0320 23:44:29.289738  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.289892  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.289912  106048 wrap.go:47] GET /healthz: (9.670371ms) 500
goroutine 29825 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0027bdd50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0027bdd50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f960a40, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0022d29c8, 0xc0029eac80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f400)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f400)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f400)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f400)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f300)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0022d29c8, 0xc00815f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00f55d740, 0xc00e689ac0, 0x75f60a0, 0xc0022d29c8, 0xc00815f300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42584]
I0320 23:44:29.290086  106048 wrap.go:47] GET /healthz: (1.572734ms) 500
goroutine 29841 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00fa33b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00fa33b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f8249e0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc00116a868, 0xc004dbe780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc00116a868, 0xc004855a00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc00116a868, 0xc004855a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc00116a868, 0xc004855a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc00116a868, 0xc004855a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc00116a868, 0xc004855a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc00116a868, 0xc004855a00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc00116a868, 0xc004855a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc00116a868, 0xc004855a00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc00116a868, 0xc004855a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc00116a868, 0xc004855a00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc00116a868, 0xc004855a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc00116a868, 0xc004855900)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc00116a868, 0xc004855900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bd00600, 0xc00e689ac0, 0x75f60a0, 0xc00116a868, 0xc004855900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42746]
I0320 23:44:29.295368  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.1882ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.316325  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.065401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.316684  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0320 23:44:29.335539  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.243429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.356309  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.03224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.356729  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0320 23:44:29.375513  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.24051ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.376474  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.376644  106048 wrap.go:47] GET /healthz: (1.142293ms) 500
goroutine 29883 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f558cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f558cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f770400, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0022d2d18, 0xc0029eb7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fee00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fee00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fee00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fee00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fed00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0022d2d18, 0xc0028fed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b08cf60, 0xc00e689ac0, 0x75f60a0, 0xc0022d2d18, 0xc0028fed00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42626]
I0320 23:44:29.386152  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.386368  106048 wrap.go:47] GET /healthz: (1.038079ms) 500
goroutine 29864 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f97cfc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f97cfc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f857e00, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc005e72758, 0xc000078b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc005e72758, 0xc0058e3c00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc005e72758, 0xc0058e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc005e72758, 0xc0058e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc005e72758, 0xc0058e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc005e72758, 0xc0058e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc005e72758, 0xc0058e3c00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc005e72758, 0xc0058e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc005e72758, 0xc0058e3c00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc005e72758, 0xc0058e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc005e72758, 0xc0058e3c00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc005e72758, 0xc0058e3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc005e72758, 0xc0058e3a00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc005e72758, 0xc0058e3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c5a52c0, 0xc00e689ac0, 0x75f60a0, 0xc005e72758, 0xc0058e3a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.398195  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.038256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.398445  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0320 23:44:29.415590  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.331026ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.436643  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.323924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.436874  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
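
Each GET ... 404 followed by POST ... 201 and a "created clusterrolebinding" line is the RBAC bootstrapper (storage_rbac.go) reconciling one default binding: look it up, and create it only if it is missing. A rough get-or-create equivalent against client-go's typed clientset is sketched below; it uses the current context-taking client-go signatures (the version vendored here in 2019 took no context), and the real reconciler also diffs and updates existing objects, which this sketch omits.

// Package bootstrapsketch is a hypothetical illustration, not code from the
// kubernetes repo.
package bootstrapsketch

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureClusterRoleBinding mirrors the GET-404 / POST-201 pairs in the log:
// fetch the default binding and create it only when it does not exist yet.
func ensureClusterRoleBinding(ctx context.Context, cs kubernetes.Interface, crb *rbacv1.ClusterRoleBinding) error {
	_, err := cs.RbacV1().ClusterRoleBindings().Get(ctx, crb.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present; the log shows a 200 instead of a 404 in this case
	}
	if !apierrors.IsNotFound(err) {
		return err // some failure other than the expected 404
	}
	_, err = cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
	return err
}
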
I0320 23:44:29.455716  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.42074ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.475972  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.476194  106048 wrap.go:47] GET /healthz: (1.099172ms) 500
goroutine 29854 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f719f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f719f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f75b900, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc000b92fd8, 0xc000079040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc000b92fd8, 0xc008bac000)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc000b92fd8, 0xc008bac000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc000b92fd8, 0xc008bac000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc000b92fd8, 0xc008bac000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc000b92fd8, 0xc008bac000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc000b92fd8, 0xc008bac000)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc000b92fd8, 0xc008bac000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc000b92fd8, 0xc008bac000)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc000b92fd8, 0xc008bac000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc000b92fd8, 0xc008bac000)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc000b92fd8, 0xc008bac000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc000b92fd8, 0xc003b6bd00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc000b92fd8, 0xc003b6bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00dcbd680, 0xc00e689ac0, 0x75f60a0, 0xc000b92fd8, 0xc003b6bd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42584]
I0320 23:44:29.476369  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.117556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.476530  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0320 23:44:29.486494  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.486695  106048 wrap.go:47] GET /healthz: (1.580584ms) 500
goroutine 29856 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f6d0070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f6d0070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f75bb00, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc000b92fe8, 0xc004dbec80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac500)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac500)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac400)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc000b92fe8, 0xc008bac400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00dcbd920, 0xc00e689ac0, 0x75f60a0, 0xc000b92fe8, 0xc008bac400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.495148  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.016842ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.525980  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.547642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.526256  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0320 23:44:29.536192  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.989671ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.555943  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.696414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.556172  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0320 23:44:29.575604  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.346062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.575776  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.575954  106048 wrap.go:47] GET /healthz: (855.57µs) 500
goroutine 29908 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f6d0620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f6d0620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f69ca60, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc000b930d8, 0xc000079a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc000b930d8, 0xc008bad500)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc000b930d8, 0xc008bad500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc000b930d8, 0xc008bad500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc000b930d8, 0xc008bad500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc000b930d8, 0xc008bad500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc000b930d8, 0xc008bad500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc000b930d8, 0xc008bad500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc000b930d8, 0xc008bad500)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc000b930d8, 0xc008bad500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc000b930d8, 0xc008bad500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc000b930d8, 0xc008bad500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc000b930d8, 0xc008bad400)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc000b930d8, 0xc008bad400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a71e420, 0xc00e689ac0, 0x75f60a0, 0xc000b930d8, 0xc008bad400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42584]
I0320 23:44:29.585878  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.586133  106048 wrap.go:47] GET /healthz: (1.070226ms) 500
goroutine 29910 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f6d0700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f6d0700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f69ccc0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc000b93110, 0xc0029ebb80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc000b93110, 0xc0092fa000)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc000b93110, 0xc0092fa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc000b93110, 0xc0092fa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc000b93110, 0xc0092fa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc000b93110, 0xc0092fa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc000b93110, 0xc0092fa000)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc000b93110, 0xc0092fa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc000b93110, 0xc0092fa000)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc000b93110, 0xc0092fa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc000b93110, 0xc0092fa000)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc000b93110, 0xc0092fa000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc000b93110, 0xc008badb00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc000b93110, 0xc008badb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a71f3e0, 0xc00e689ac0, 0x75f60a0, 0xc000b93110, 0xc008badb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.595952  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.7492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.596269  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0320 23:44:29.615403  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.124627ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.639954  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.525844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.640215  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0320 23:44:29.655688  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.308324ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.679272  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.575203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.679400  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.679601  106048 wrap.go:47] GET /healthz: (3.126077ms) 500
goroutine 29943 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f610d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f610d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f60e700, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc00eb24d90, 0xc00396e140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db400)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db400)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db400)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db400)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db300)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc00eb24d90, 0xc0093db300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ae09560, 0xc00e689ac0, 0x75f60a0, 0xc00eb24d90, 0xc0093db300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42626]
I0320 23:44:29.679888  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0320 23:44:29.686514  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.686676  106048 wrap.go:47] GET /healthz: (962.304µs) 500
goroutine 29866 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f97dea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f97dea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f73d160, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc005e728b0, 0xc004dbf400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc005e728b0, 0xc00c124500)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc005e728b0, 0xc00c124500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc005e728b0, 0xc00c124500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc005e728b0, 0xc00c124500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc005e728b0, 0xc00c124500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc005e728b0, 0xc00c124500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc005e728b0, 0xc00c124500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc005e728b0, 0xc00c124500)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc005e728b0, 0xc00c124500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc005e728b0, 0xc00c124500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc005e728b0, 0xc00c124500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc005e728b0, 0xc00c124400)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc005e728b0, 0xc00c124400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c5a5bc0, 0xc00e689ac0, 0x75f60a0, 0xc005e728b0, 0xc00c124400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.696889  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.537535ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.717347  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.06296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.717599  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0320 23:44:29.735224  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.03327ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.756163  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.864944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.756625  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0320 23:44:29.775384  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.117494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.780964  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.781178  106048 wrap.go:47] GET /healthz: (2.508661ms) 500
goroutine 29945 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f610e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f610e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f60e960, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc00eb24e08, 0xc009254780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc00eb24e08, 0xc0093dba00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc00eb24e08, 0xc0093dba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc00eb24e08, 0xc0093dba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc00eb24e08, 0xc0093dba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc00eb24e08, 0xc0093dba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc00eb24e08, 0xc0093dba00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc00eb24e08, 0xc0093dba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc00eb24e08, 0xc0093dba00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc00eb24e08, 0xc0093dba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc00eb24e08, 0xc0093dba00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc00eb24e08, 0xc0093dba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc00eb24e08, 0xc0093db900)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc00eb24e08, 0xc0093db900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ae098c0, 0xc00e689ac0, 0x75f60a0, 0xc00eb24e08, 0xc0093db900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42584]
I0320 23:44:29.786647  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.786853  106048 wrap.go:47] GET /healthz: (1.848694ms) 500
goroutine 29902 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f4247e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f4247e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f601360, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0022d3130, 0xc009254c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0022d3130, 0xc006efd700)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0022d3130, 0xc006efd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0022d3130, 0xc006efd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d3130, 0xc006efd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d3130, 0xc006efd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0022d3130, 0xc006efd700)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0022d3130, 0xc006efd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0022d3130, 0xc006efd700)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0022d3130, 0xc006efd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0022d3130, 0xc006efd700)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0022d3130, 0xc006efd700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0022d3130, 0xc006efd600)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0022d3130, 0xc006efd600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0097cd200, 0xc00e689ac0, 0x75f60a0, 0xc0022d3130, 0xc006efd600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.796473  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.282799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.796930  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0320 23:44:29.815749  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.485456ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.836276  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.968953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.836535  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0320 23:44:29.855286  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.07087ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.876720  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.876921  106048 wrap.go:47] GET /healthz: (957.409µs) 500
goroutine 29936 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f56b730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f56b730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f5ed280, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc00116ad30, 0xc004dbfb80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab300)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab300)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab300)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab300)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab200)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc00116ad30, 0xc0074ab200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a2d0780, 0xc00e689ac0, 0x75f60a0, 0xc00116ad30, 0xc0074ab200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42626]
I0320 23:44:29.877673  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.46944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.877873  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0320 23:44:29.886491  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.886701  106048 wrap.go:47] GET /healthz: (1.669819ms) 500
goroutine 29954 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f4251f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f4251f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f500600, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0022d3298, 0xc00d75c140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d200)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d200)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d200)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d200)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d100)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0022d3298, 0xc006e8d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0090a5bc0, 0xc00e689ac0, 0x75f60a0, 0xc0022d3298, 0xc006e8d100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.895411  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.160059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.916662  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.239971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.917015  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0320 23:44:29.935500  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.217731ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.957290  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.013742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.957534  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0320 23:44:29.975556  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.239336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:29.975846  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.976007  106048 wrap.go:47] GET /healthz: (878.311µs) 500
goroutine 29964 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f425c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f425c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f4d4640, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0022d3428, 0xc00d75c640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9200)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9200)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9200)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9200)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9100)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0022d3428, 0xc0070d9100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00799aae0, 0xc00e689ac0, 0x75f60a0, 0xc0022d3428, 0xc0070d9100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42626]
I0320 23:44:29.986242  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:29.986447  106048 wrap.go:47] GET /healthz: (1.452093ms) 500
goroutine 29953 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f611f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f611f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f52f9c0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc00eb25670, 0xc00bffe280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf900)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf900)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf900)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf900)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf700)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc00eb25670, 0xc006ebf700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0093d1860, 0xc00e689ac0, 0x75f60a0, 0xc00eb25670, 0xc006ebf700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.996044  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.823863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:29.996402  106048 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0320 23:44:30.015644  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.438411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.017437  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.282467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.040621  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.198665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.040878  106048 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
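
From here the bootstrapper appears to have finished the cluster-scoped roles and bindings and moves on to namespaced objects: for each default role it first GETs the target namespace (the 200 on /api/v1/namespaces/kube-system above) and then POSTs the role into kube-system or kube-public, while /healthz keeps returning 500 until this phase completes as well.
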
I0320 23:44:30.056560  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.283411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.058367  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.337845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.076000  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.715505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.076177  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:30.076269  106048 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0320 23:44:30.076361  106048 wrap.go:47] GET /healthz: (1.113126ms) 500
goroutine 30021 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f288540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f288540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f4d5cc0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0022d3550, 0xc00d75ca00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0022d3550, 0xc004180a00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0022d3550, 0xc004180a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0022d3550, 0xc004180a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d3550, 0xc004180a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d3550, 0xc004180a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0022d3550, 0xc004180a00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0022d3550, 0xc004180a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0022d3550, 0xc004180a00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0022d3550, 0xc004180a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0022d3550, 0xc004180a00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0022d3550, 0xc004180a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0022d3550, 0xc004180300)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0022d3550, 0xc004180300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00598f620, 0xc00e689ac0, 0x75f60a0, 0xc0022d3550, 0xc004180300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42584]
I0320 23:44:30.085832  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:30.086016  106048 wrap.go:47] GET /healthz: (1.058496ms) 500
goroutine 29973 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f419730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f419730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f4f36e0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc005e72c00, 0xc0092552c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc005e72c00, 0xc007004500)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc005e72c00, 0xc007004500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc005e72c00, 0xc007004500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc005e72c00, 0xc007004500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc005e72c00, 0xc007004500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc005e72c00, 0xc007004500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc005e72c00, 0xc007004500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc005e72c00, 0xc007004500)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc005e72c00, 0xc007004500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc005e72c00, 0xc007004500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc005e72c00, 0xc007004500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc005e72c00, 0xc007004400)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc005e72c00, 0xc007004400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008cadc20, 0xc00e689ac0, 0x75f60a0, 0xc005e72c00, 0xc007004400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.095253  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.094522ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.096879  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.210394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.116809  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.510642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.117114  106048 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0320 23:44:30.135543  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.253681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.137294  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.235472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.156170  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.969747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.156621  106048 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0320 23:44:30.175428  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.130107ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.176218  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:30.176398  106048 wrap.go:47] GET /healthz: (962.919µs) 500
goroutine 29921 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f6d1ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f6d1ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f386100, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc000b93568, 0xc00d75cf00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc000b93568, 0xc002f5d600)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc000b93568, 0xc002f5d600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc000b93568, 0xc002f5d600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc000b93568, 0xc002f5d600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc000b93568, 0xc002f5d600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc000b93568, 0xc002f5d600)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc000b93568, 0xc002f5d600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc000b93568, 0xc002f5d600)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc000b93568, 0xc002f5d600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc000b93568, 0xc002f5d600)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc000b93568, 0xc002f5d600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc000b93568, 0xc002f5d300)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc000b93568, 0xc002f5d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0055250e0, 0xc00e689ac0, 0x75f60a0, 0xc000b93568, 0xc002f5d300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42626]
I0320 23:44:30.177040  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.246109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.185873  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:30.186039  106048 wrap.go:47] GET /healthz: (1.054385ms) 500
goroutine 30035 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f6d1dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f6d1dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f3865a0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc000b935e8, 0xc00396f180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a800)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a800)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a800)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a800)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a700)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc000b935e8, 0xc003a1a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005525980, 0xc00e689ac0, 0x75f60a0, 0xc000b935e8, 0xc003a1a700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
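The repeated "GET /healthz: ... 500" entries in this stretch all have one cause: the generic apiserver wires every post-start hook into /healthz, and the rbac/bootstrap-roles hook stays failing (the aggregate endpoint deliberately prints "reason withheld") until the default roles and rolebindings created below exist. The test harness simply keeps polling until it gets 200. A minimal sketch of such a poll loop, assuming a plain HTTP address for the test apiserver; the addr parameter and the 100ms interval are illustrative, though the log shows retries roughly that often:

package healthzwait

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls GET /healthz until the apiserver answers 200.
// Until the rbac/bootstrap-roles post-start hook finishes, the endpoint
// keeps answering 500, exactly as the stack-trace blocks above show.
func waitForHealthz(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(addr + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // all post-start hooks are done
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz not ready after %v", timeout)
}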
I0320 23:44:30.196836  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.592883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.197164  106048 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0320 23:44:30.216147  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.855062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.218226  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.511925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.236633  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.384997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.236922  106048 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0320 23:44:30.255481  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.245366ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.257307  106048 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.441887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.278372  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (4.106023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.278520  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:30.278665  106048 wrap.go:47] GET /healthz: (3.577645ms) 500
goroutine 30033 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f289ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f289ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f323be0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0022d3860, 0xc009255900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2600)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2600)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2600)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2600)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2500)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0022d3860, 0xc0026d2500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0051559e0, 0xc00e689ac0, 0x75f60a0, 0xc0022d3860, 0xc0026d2500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42626]
I0320 23:44:30.278982  106048 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0320 23:44:30.285900  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:30.286100  106048 wrap.go:47] GET /healthz: (1.057007ms) 500
goroutine 30051 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f289f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f289f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f323de0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc0022d3870, 0xc003b91040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2b00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2b00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2b00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2b00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2a00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc0022d3870, 0xc0026d2a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005155e00, 0xc00e689ac0, 0x75f60a0, 0xc0022d3870, 0xc0026d2a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.295439  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.191227ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.297683  106048 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.723514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.316739  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.515256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.316953  106048 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0320 23:44:30.335405  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.175509ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.337212  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.391112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.356627  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.370736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.356893  106048 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0320 23:44:30.376358  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.867935ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.378388  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:30.378557  106048 wrap.go:47] GET /healthz: (2.743694ms) 500
goroutine 30073 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ee98230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ee98230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f215ac0, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc00116b6b8, 0xc002c99540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc00116b6b8, 0xc002419200)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc00116b6b8, 0xc002419200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc00116b6b8, 0xc002419200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc00116b6b8, 0xc002419200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc00116b6b8, 0xc002419200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc00116b6b8, 0xc002419200)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc00116b6b8, 0xc002419200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc00116b6b8, 0xc002419200)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc00116b6b8, 0xc002419200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc00116b6b8, 0xc002419200)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc00116b6b8, 0xc002419200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc00116b6b8, 0xc002419100)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc00116b6b8, 0xc002419100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0049cd680, 0xc00e689ac0, 0x75f60a0, 0xc00116b6b8, 0xc002419100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42626]
I0320 23:44:30.379015  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.226363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.386127  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:30.386286  106048 wrap.go:47] GET /healthz: (1.139822ms) 500
goroutine 30039 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f1a41c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f1a41c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f387100, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc000b93680, 0xc0003b3180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc000b93680, 0xc003a1ba00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc000b93680, 0xc003a1ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc000b93680, 0xc003a1ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc000b93680, 0xc003a1ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc000b93680, 0xc003a1ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc000b93680, 0xc003a1ba00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc000b93680, 0xc003a1ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc000b93680, 0xc003a1ba00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc000b93680, 0xc003a1ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc000b93680, 0xc003a1ba00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc000b93680, 0xc003a1ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc000b93680, 0xc003a1b900)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc000b93680, 0xc003a1b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b964ea0, 0xc00e689ac0, 0x75f60a0, 0xc000b93680, 0xc003a1b900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.399185  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.669609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.399485  106048 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0320 23:44:30.415269  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.035382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.416947  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.296486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.436506  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.269847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.436778  106048 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0320 23:44:30.455614  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.343538ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.457521  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.432516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.477220  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:30.477401  106048 wrap.go:47] GET /healthz: (2.13915ms) 500
goroutine 30098 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f1b9f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f1b9f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f168d40, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc005e72fd0, 0xc0003b3680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cff00)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cff00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cff00)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cff00)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cfe00)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc005e72fd0, 0xc0033cfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00aa14c00, 0xc00e689ac0, 0x75f60a0, 0xc005e72fd0, 0xc0033cfe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:42584]
I0320 23:44:30.477928  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.607465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.478133  106048 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0320 23:44:30.486117  106048 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0320 23:44:30.486264  106048 wrap.go:47] GET /healthz: (1.287966ms) 500
goroutine 30078 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ee98fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ee98fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00f1ad500, 0x1f4)
net/http.Error(0x7faa05b1cab8, 0xc00116b8e0, 0xc002c99900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7500)
net/http.HandlerFunc.ServeHTTP(0xc00f52f920, 0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0118dc440, 0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00dd901c0, 0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x453750f, 0xe, 0xc00e6e6bd0, 0xc00dd901c0, 0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb8c0, 0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7500)
net/http.HandlerFunc.ServeHTTP(0xc00ed72870, 0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7500)
net/http.HandlerFunc.ServeHTTP(0xc00f2cb900, 0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7400)
net/http.HandlerFunc.ServeHTTP(0xc00e661cc0, 0x7faa05b1cab8, 0xc00116b8e0, 0xc000be7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b390480, 0xc00e689ac0, 0x75f60a0, 0xc00116b8e0, 0xc000be7400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.495262  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.092444ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.496995  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.369693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.516627  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.092761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.516922  106048 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0320 23:44:30.549811  106048 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (15.559247ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.558476  106048 wrap.go:47] GET /api/v1/namespaces/kube-system: (7.880449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.561261  106048 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.316819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.561489  106048 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0320 23:44:30.577191  106048 wrap.go:47] GET /healthz: (1.723825ms) 200 [Go-http-client/1.1 127.0.0.1:42626]
W0320 23:44:30.577957  106048 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 23:44:30.578023  106048 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 23:44:30.578069  106048 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 23:44:30.578081  106048 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 23:44:30.578094  106048 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 23:44:30.578106  106048 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 23:44:30.578116  106048 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 23:44:30.578130  106048 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 23:44:30.578144  106048 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0320 23:44:30.578155  106048 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
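The ten warnings above come from client-go's cache mutation detector, which the integration tests enable: it deep-copies every object added to an informer cache and periodically compares the copy against the live cache entry, panicking if anything mutated a cached object in place. Retaining those copies is why enabling it is flagged as memory leakage. A minimal sketch of the toggle, assuming the standard environment variable client-go's cache package reads:

package main

import "os"

func main() {
	// Must be set before any informer caches are constructed; when true,
	// client-go deep-copies cached objects and later compares them against
	// the live entries to detect in-place mutation.
	os.Setenv("KUBE_CACHE_MUTATION_DETECTOR", "true")
}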
I0320 23:44:30.578232  106048 factory.go:331] Creating scheduler from algorithm provider 'DefaultProvider'
I0320 23:44:30.578243  106048 factory.go:412] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
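The two factory.go lines above list the DefaultProvider's configuration: fit predicates are hard boolean filters (a node failing any one is infeasible), and priority functions score the nodes that survive. An illustrative sketch of that two-phase loop, using stand-in types rather than kube-scheduler's real ones:

package sketch

type node struct{ name string }

type predicate func(pod string, n node) bool // e.g. PodToleratesNodeTaints
type priority func(pod string, n node) int   // e.g. LeastRequestedPriority

// schedule filters nodes with every predicate, then picks the feasible
// node with the highest summed priority score.
func schedule(pod string, nodes []node, preds []predicate, prios []priority) (string, bool) {
	best, bestScore, found := "", -1, false
	for _, n := range nodes {
		feasible := true
		for _, p := range preds {
			if !p(pod, n) {
				feasible = false
				break
			}
		}
		if !feasible {
			continue
		}
		score := 0
		for _, pr := range prios {
			score += pr(pod, n)
		}
		if score > bestScore {
			best, bestScore, found = n.name, score, true
		}
	}
	return best, found
}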
I0320 23:44:30.578469  106048 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0320 23:44:30.578695  106048 reflector.go:123] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:211
I0320 23:44:30.578714  106048 reflector.go:161] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:211
I0320 23:44:30.579818  106048 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (671.512µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42626]
I0320 23:44:30.580578  106048 get.go:251] Starting watch for /api/v1/pods, rv=22216 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=9m56s
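The two lines above are the scheduler's pod informer doing the standard list-then-watch: a LIST of non-terminal pods (the status.phase field selector) followed by a WATCH starting at the returned resourceVersion (rv=22216). A minimal sketch of that list/watch construction, assuming a reachable clientset and client-go of this vintage:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// podListWatch lists and watches pods that are neither Failed nor
// Succeeded, matching the field selector in the GET above.
func podListWatch(cs kubernetes.Interface) *cache.ListWatch {
	sel := fields.ParseSelectorOrDie("status.phase!=Failed,status.phase!=Succeeded")
	return cache.NewListWatchFromClient(
		cs.CoreV1().RESTClient(), // issues LIST/WATCH against /api/v1/pods
		string(v1.ResourcePods),
		v1.NamespaceAll,
		sel,
	)
}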
I0320 23:44:30.586243  106048 wrap.go:47] GET /healthz: (1.118382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.588469  106048 wrap.go:47] GET /api/v1/namespaces/default: (1.933047ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.590519  106048 wrap.go:47] POST /api/v1/namespaces: (1.696297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.591860  106048 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.059271ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.595662  106048 wrap.go:47] POST /api/v1/namespaces/default/services: (3.452839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.596810  106048 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (848.12µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.598438  106048 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.3235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
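The 404/201 pairs above are the bootstrap controller ensuring that the default namespace, the "kubernetes" service, and its endpoints exist: each GET that misses is followed by a POST creating the object. A minimal sketch of that ensure pattern for the namespace, assuming a reachable clientset and the pre-context client-go signatures of this era:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureDefaultNamespace creates the "default" namespace if the lookup
// 404s, mirroring the GET-then-POST sequence in the log above.
func ensureDefaultNamespace(cs kubernetes.Interface) error {
	_, err := cs.CoreV1().Namespaces().Get("default", metav1.GetOptions{})
	if errors.IsNotFound(err) { // the 404 in the log
		_, err = cs.CoreV1().Namespaces().Create(&v1.Namespace{
			ObjectMeta: metav1.ObjectMeta{Name: "default"},
		})
	}
	return err
}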
I0320 23:44:30.678657  106048 shared_informer.go:123] caches populated
I0320 23:44:30.678688  106048 controller_utils.go:1034] Caches are synced for scheduler controller
I0320 23:44:30.679015  106048 reflector.go:123] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.679033  106048 reflector.go:161] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.679390  106048 reflector.go:123] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.679402  106048 reflector.go:161] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.679799  106048 reflector.go:123] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.679814  106048 reflector.go:161] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.680095  106048 reflector.go:123] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.680106  106048 reflector.go:161] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.680387  106048 reflector.go:123] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.680398  106048 reflector.go:161] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.680667  106048 reflector.go:123] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.680676  106048 reflector.go:161] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.680967  106048 reflector.go:123] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.680977  106048 reflector.go:161] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.681277  106048 reflector.go:123] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.681290  106048 reflector.go:161] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.681584  106048 reflector.go:123] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.681595  106048 reflector.go:161] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0320 23:44:30.682912  106048 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (513.229µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42584]
I0320 23:44:30.682951  106048 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (420.805µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42960]
I0320 23:44:30.683448  106048 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=22217 labels= fields= timeout=8m44s
I0320 23:44:30.683518  106048 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (436.95µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42946]
I0320 23:44:30.683922  106048 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (321.747µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42948]
I0320 23:44:30.684326  106048 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (327.035µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42950]
I0320 23:44:30.684601  106048 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (365.108µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42952]
I0320 23:44:30.684605  106048 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=22216 labels= fields= timeout=9m52s
I0320 23:44:30.684889  106048 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=22216 labels= fields= timeout=9m49s
I0320 23:44:30.684995  106048 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (310.869µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42954]
I0320 23:44:30.685550  106048 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=22217 labels= fields= timeout=5m9s
I0320 23:44:30.685685  106048 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=22217 labels= fields= timeout=8m16s
I0320 23:44:30.685963  106048 get.go:251] Starting watch for /api/v1/nodes, rv=22216 labels= fields= timeout=6m15s
I0320 23:44:30.686173  106048 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (341.777µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42958]
I0320 23:44:30.686320  106048 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=22217 labels= fields= timeout=6m35s
I0320 23:44:30.686832  106048 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=22216 labels= fields= timeout=7m11s
I0320 23:44:30.687406  106048 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (499.167µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42956]
I0320 23:44:30.688116  106048 get.go:251] Starting watch for /api/v1/services, rv=22482 labels= fields= timeout=8m15s
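The burst above is a shared informer factory starting: each registered informer's reflector lists at a resourceVersion and then opens a watch with a randomized timeout (hence the differing timeout= values). The "(1s)" in each "Starting reflector" line is the factory's resync period, which is also what produces the "forcing resync" entries one second later. A minimal sketch, assuming a reachable clientset:

package sketch

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// startInformers builds a factory with a 1s resync, registers a few of
// the informers seen in the log, and starts them all.
func startInformers(cs kubernetes.Interface, stop <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(cs, time.Second)
	// Touching an informer registers it with the factory before Start.
	factory.Core().V1().Nodes().Informer()
	factory.Core().V1().Services().Informer()
	factory.Storage().V1().StorageClasses().Informer()
	factory.Start(stop)            // each informer lists, then watches
	factory.WaitForCacheSync(stop) // "caches populated" once synced
}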
I0320 23:44:30.778953  106048 shared_informer.go:123] caches populated
I0320 23:44:30.883129  106048 shared_informer.go:123] caches populated
I0320 23:44:30.983428  106048 shared_informer.go:123] caches populated
I0320 23:44:31.083634  106048 shared_informer.go:123] caches populated
I0320 23:44:31.183956  106048 shared_informer.go:123] caches populated
I0320 23:44:31.284124  106048 shared_informer.go:123] caches populated
I0320 23:44:31.384363  106048 shared_informer.go:123] caches populated
I0320 23:44:31.484623  106048 shared_informer.go:123] caches populated
I0320 23:44:31.584775  106048 shared_informer.go:123] caches populated
I0320 23:44:31.683387  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:31.684093  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:31.684281  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:31.685009  106048 shared_informer.go:123] caches populated
I0320 23:44:31.685595  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:31.687957  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:31.688484  106048 wrap.go:47] POST /api/v1/nodes: (2.536123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42970]
I0320 23:44:31.692867  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.702979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42970]
I0320 23:44:31.693470  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0
I0320 23:44:31.693490  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0
I0320 23:44:31.694496  106048 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0", node "node1"
I0320 23:44:31.694541  106048 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0320 23:44:31.694619  106048 factory.go:733] Attempting to bind rpod-0 to node1
I0320 23:44:31.696332  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.570602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42970]
I0320 23:44:31.696394  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1
I0320 23:44:31.696407  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1
I0320 23:44:31.696503  106048 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1", node "node1"
I0320 23:44:31.696525  106048 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0320 23:44:31.696568  106048 factory.go:733] Attempting to bind rpod-1 to node1
I0320 23:44:31.697655  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-0/binding: (1.861419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42972]
I0320 23:44:31.697834  106048 scheduler.go:572] pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 23:44:31.698284  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1/binding: (1.490176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42970]
I0320 23:44:31.698430  106048 scheduler.go:572] pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
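The two binding POSTs above are how a scheduling decision is persisted: the scheduler creates a Binding against the pod's binding subresource rather than updating the pod itself. A minimal sketch, assuming a reachable clientset and the pre-context Bind signature client-go had at this time:

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bind records the scheduling decision by POSTing a Binding to
// /api/v1/namespaces/<ns>/pods/<pod>/binding, as in the log above.
func bind(cs kubernetes.Interface, namespace, pod, node string) error {
	return cs.CoreV1().Pods(namespace).Bind(&v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: pod},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	})
}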
I0320 23:44:31.699595  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.488672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42972]
I0320 23:44:31.701451  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.343617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42972]
I0320 23:44:31.801601  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-0: (4.254711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42972]
I0320 23:44:31.904562  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1: (1.697703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42972]
I0320 23:44:31.904888  106048 preemption_test.go:561] Creating the preemptor pod...
I0320 23:44:31.908686  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.030215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42972]
I0320 23:44:31.909450  106048 preemption_test.go:567] Creating additional pods...
I0320 23:44:31.909460  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod
I0320 23:44:31.909479  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod
I0320 23:44:31.909586  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:31.909630  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
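When no node fits (here: 0/1 nodes are available, insufficient cpu and memory), the scheduler records the failure on the pod rather than dropping it: it PUTs a PodScheduled=False condition with reason Unschedulable into the pod's status subresource, which is what the PUT .../pods/preemptor-pod/status lines below are. An illustrative sketch of that condition:

package sketch

import v1 "k8s.io/api/core/v1"

// unschedulableCondition builds the status condition behind the
// "(PodScheduled==False, Reason=Unschedulable)" log lines.
func unschedulableCondition(msg string) v1.PodCondition {
	return v1.PodCondition{
		Type:    v1.PodScheduled,
		Status:  v1.ConditionFalse,
		Reason:  v1.PodReasonUnschedulable, // "Unschedulable"
		Message: msg,                       // e.g. "0/1 nodes are available: ..."
	}
}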
I0320 23:44:31.912580  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.150691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43056]
I0320 23:44:31.912763  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.105655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42972]
I0320 23:44:31.912778  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.274697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43052]
I0320 23:44:31.913162  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod/status: (2.772229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42970]
I0320 23:44:31.914805  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.254083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42970]
I0320 23:44:31.914876  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.472035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43052]
I0320 23:44:31.915012  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
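The generic_scheduler.go line above is the preemption dry run: after the fit failure, the scheduler checks whether evicting lower-priority pods from some node would make the preemptor feasible, and node1 qualifies. An illustrative sketch with stand-in types (CPU only; the real check covers all resources and re-runs the fit predicates):

package sketch

type podInfo struct {
	name     string
	priority int32
	cpuMilli int64
}

// potentialVictims collects the lower-priority pods on a node whose
// removal would free enough CPU for the preemptor, a stand-in for the
// dry run behind "Node node1 is a potential node for preemption."
func potentialVictims(preemptor podInfo, running []podInfo, freeMilli int64) ([]podInfo, bool) {
	var victims []podInfo
	for _, p := range running {
		if p.priority < preemptor.priority {
			victims = append(victims, p)
			freeMilli += p.cpuMilli
		}
	}
	return victims, freeMilli >= preemptor.cpuMilli
}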
I0320 23:44:31.917076  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod/status: (1.692234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43056]
I0320 23:44:31.917104  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.771594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42970]
I0320 23:44:31.919626  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.115628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43056]
I0320 23:44:31.925393  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1: (7.852644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42970]
I0320 23:44:31.925664  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:31.925683  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:31.925801  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:31.925842  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:31.925998  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (5.978027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43056]
I0320 23:44:31.928434  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.552741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42970]
I0320 23:44:31.928596  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (2.278279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43056]
I0320 23:44:31.928618  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.187949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43080]
I0320 23:44:31.929100  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0/status: (2.883469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43078]
I0320 23:44:31.932214  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.101385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43080]
I0320 23:44:31.932627  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (3.180977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43078]
I0320 23:44:31.932898  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:31.933361  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (4.588609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42970]
I0320 23:44:31.933820  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:31.933840  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:31.933935  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:31.933976  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:31.935305  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.080497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43078]
I0320 23:44:31.935353  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (1.040188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43056]
I0320 23:44:31.935774  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.123187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43090]
I0320 23:44:31.936745  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1/status: (2.496736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42970]
I0320 23:44:31.938304  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.630797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43056]
I0320 23:44:31.939489  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (1.734841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43090]
I0320 23:44:31.939863  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:31.940030  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:31.940120  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:31.940227  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:31.940260  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:31.942124  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.230762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43098]
I0320 23:44:31.942596  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2/status: (1.725696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43078]
I0320 23:44:31.943024  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (1.641803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43096]
I0320 23:44:31.943726  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.304444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43090]
I0320 23:44:31.943991  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (1.067147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43078]
I0320 23:44:31.944218  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:31.944451  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:31.944465  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:31.944562  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:31.944599  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:31.946281  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (1.466544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43096]
I0320 23:44:31.946317  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.196909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43090]
I0320 23:44:31.947115  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3/status: (2.313033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43078]
I0320 23:44:31.950803  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (3.192129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43078]
I0320 23:44:31.951240  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (4.299252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43090]
I0320 23:44:31.951935  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (4.394065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43098]
I0320 23:44:31.952526  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:31.952711  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:31.952732  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:31.952827  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:31.952872  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:31.957128  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (3.512796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43126]
I0320 23:44:31.958705  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4/status: (5.556282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43096]
I0320 23:44:31.959264  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (5.653711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0320 23:44:31.959615  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (6.819971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43098]
I0320 23:44:31.960634  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (1.277028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43096]
I0320 23:44:31.961851  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:31.962728  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.701275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43130]
I0320 23:44:31.963167  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:31.963337  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:31.963458  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:31.963497  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:31.967089  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (2.799382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:31.967544  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.930756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43096]
I0320 23:44:31.967951  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (3.722044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43126]
I0320 23:44:31.969165  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:31.969322  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:31.969338  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:31.969410  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:31.969458  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:31.973134  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-0.158dcf634dc004ae: (8.87475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43138]
I0320 23:44:31.973229  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5/status: (3.525693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43126]
I0320 23:44:31.973668  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (5.288472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:31.974037  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (4.175309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43096]
I0320 23:44:31.976549  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.997906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43138]
I0320 23:44:31.976730  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (2.364765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43126]
I0320 23:44:31.976951  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:31.977152  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:31.977171  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:31.977286  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:31.977324  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:31.979752  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.373981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:31.980838  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.339607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43150]
I0320 23:44:31.992633  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (13.598637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43096]
I0320 23:44:31.993077  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6/status: (14.011487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43138]
I0320 23:44:31.993874  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (13.717342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:31.995080  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (1.446721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43096]
I0320 23:44:31.995676  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:31.995920  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:31.995937  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:31.996016  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:31.996075  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:31.997703  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.224985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:31.998452  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.601321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43158]
I0320 23:44:32.000105  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.979467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.000569  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7/status: (4.281986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43096]
I0320 23:44:32.000771  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (4.24304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43150]
I0320 23:44:32.002127  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.676017ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.002819  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.104042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43096]
I0320 23:44:32.003040  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.003238  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:32.003272  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:32.003369  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.003416  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.004515  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.002348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.005172  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.417955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43158]
I0320 23:44:32.005824  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8/status: (2.208116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43096]
I0320 23:44:32.006021  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (1.197435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.007600  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (1.437274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43096]
I0320 23:44:32.007928  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.008149  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.193916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43160]
I0320 23:44:32.009105  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:32.009151  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:32.009300  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.009381  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.011387  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.701951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.012546  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (2.817037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43158]
I0320 23:44:32.012734  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (2.449026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43162]
I0320 23:44:32.012796  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.012962  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:32.012977  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:32.013097  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.013142  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.014501  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.140321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43158]
I0320 23:44:32.015120  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9/status: (1.795005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43162]
I0320 23:44:32.015697  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.944724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.017625  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.51546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.017891  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (2.398623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43162]
I0320 23:44:32.018133  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.018263  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:32.018277  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:32.018350  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.018383  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.020465  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.503066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43164]
I0320 23:44:32.020514  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.191992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43162]
I0320 23:44:32.021493  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10/status: (2.799791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.022452  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.563033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43164]
I0320 23:44:32.022697  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-2.158dcf634e9c0e8f: (6.346115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43158]
I0320 23:44:32.023097  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.13734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.023357  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.023526  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:32.023581  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:32.023737  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.023809  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.025030  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.992228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43164]
I0320 23:44:32.026320  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11/status: (2.224803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.027337  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.701116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43164]
I0320 23:44:32.027647  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (3.251296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43166]
I0320 23:44:32.028317  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (1.355168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.028515  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.028686  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:32.028697  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:32.028783  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.028818  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.032436  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12/status: (2.019102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.032513  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (4.786109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43164]
I0320 23:44:32.033507  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (10.232118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43162]
I0320 23:44:32.034493  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (4.095578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43166]
I0320 23:44:32.035299  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (2.291351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.035304  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.801259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43164]
I0320 23:44:32.035650  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.035930  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:32.035947  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:32.036036  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.036090  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.037801  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (1.208256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43168]
I0320 23:44:32.039759  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (4.144138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43162]
I0320 23:44:32.040228  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13/status: (3.63903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43166]
I0320 23:44:32.040678  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (4.836097ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.042873  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (1.681821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43168]
I0320 23:44:32.045139  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.045377  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:32.045416  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:32.043390  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.12079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43162]
I0320 23:44:32.045609  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.045725  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.043904  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.158236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.047620  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (1.273043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.048946  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.309285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43162]
I0320 23:44:32.049405  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.485531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43170]
I0320 23:44:32.051739  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.68747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.052252  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.000593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43172]
I0320 23:44:32.055411  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (3.293232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.055994  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.67334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43172]
I0320 23:44:32.058417  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.945354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43172]
I0320 23:44:32.060732  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14/status: (1.675118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43168]
I0320 23:44:32.060803  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.828774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43172]
I0320 23:44:32.062538  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (1.16013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.062815  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.063096  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.880389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43172]
I0320 23:44:32.063156  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:32.063293  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:32.063467  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.063536  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.065712  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.05157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43172]
I0320 23:44:32.066607  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (2.269634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43174]
I0320 23:44:32.066607  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15/status: (2.83476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.067643  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.631851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43172]
I0320 23:44:32.069308  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.77715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43176]
I0320 23:44:32.069759  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (2.069631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.069981  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.070158  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:32.070173  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:32.070268  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.070308  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.081652  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (10.74355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43180]
I0320 23:44:32.086359  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16/status: (15.728909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.086673  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (15.420973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43174]
I0320 23:44:32.088539  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (18.589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43178]
I0320 23:44:32.088628  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (1.774713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43136]
I0320 23:44:32.088899  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.089170  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:32.089222  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:32.089392  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.089450  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.091441  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (1.193406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43188]
I0320 23:44:32.091992  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.773156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43190]
I0320 23:44:32.095925  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (6.731539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43174]
I0320 23:44:32.097232  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17/status: (7.540368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43180]
I0320 23:44:32.099088  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (1.399943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43180]
I0320 23:44:32.099540  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.412088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43190]
I0320 23:44:32.099599  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.099769  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:32.099789  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:32.099869  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.099914  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.102317  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18/status: (1.948332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43188]
I0320 23:44:32.102555  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.020801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43194]
I0320 23:44:32.102744  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.761925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43180]
I0320 23:44:32.104174  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (1.507501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43188]
I0320 23:44:32.104384  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.104575  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:32.104590  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:32.104652  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.473173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43180]
I0320 23:44:32.104693  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.104731  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.106710  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.574703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43194]
I0320 23:44:32.107125  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.80968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43196]
I0320 23:44:32.107733  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19/status: (2.823901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43188]
I0320 23:44:32.108613  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (6.71048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43192]
I0320 23:44:32.109319  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.787914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43196]
I0320 23:44:32.109744  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (2.508983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43194]
I0320 23:44:32.111567  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (2.445585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43188]
I0320 23:44:32.111583  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.884874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43196]
I0320 23:44:32.112031  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.112245  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:32.112265  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:32.112356  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.112401  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.114805  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.746893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43200]
I0320 23:44:32.114997  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (2.215401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43198]
I0320 23:44:32.115511  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20/status: (2.800385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43194]
I0320 23:44:32.117725  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (1.848383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43194]
I0320 23:44:32.118172  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.118385  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:32.118440  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:32.118578  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.118658  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.119955  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (1.009167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43198]
I0320 23:44:32.120870  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.880034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43200]
I0320 23:44:32.122861  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21/status: (2.038302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43198]
I0320 23:44:32.124645  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (1.189826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43198]
I0320 23:44:32.125121  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.125390  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:32.125448  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:32.125612  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.126342  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.127112  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (1.121406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43198]
I0320 23:44:32.127785  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (1.084213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43200]
I0320 23:44:32.128097  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.128307  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:32.128344  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:32.128470  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.128538  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.131753  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-8.158dcf63525fb03a: (3.389964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43200]
I0320 23:44:32.132209  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22/status: (3.335353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43198]
I0320 23:44:32.133748  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (1.385799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43200]
I0320 23:44:32.134613  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (1.339408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43204]
I0320 23:44:32.134834  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.135116  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:32.135161  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:32.135221  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.538929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43198]
I0320 23:44:32.135331  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.135375  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.136916  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.237299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43206]
I0320 23:44:32.137881  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23/status: (2.161761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43200]
I0320 23:44:32.139493  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (3.51161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43208]
I0320 23:44:32.139723  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.222762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43200]
I0320 23:44:32.140046  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.140264  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:32.140284  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:32.140394  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.140450  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.143174  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.298724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43210]
I0320 23:44:32.143718  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24/status: (2.98773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43206]
I0320 23:44:32.144130  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (3.416108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43208]
I0320 23:44:32.145294  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (1.067897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43206]
I0320 23:44:32.145609  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.145819  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:32.145834  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:32.145900  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.145938  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.147455  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (984.098µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43210]
I0320 23:44:32.148021  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.509948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43212]
I0320 23:44:32.148678  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25/status: (2.515899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43208]
I0320 23:44:32.150028  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (931.227µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43212]
I0320 23:44:32.150399  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.150612  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:32.150634  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:32.150779  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.150823  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.152307  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (929.298µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43210]
I0320 23:44:32.152849  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.468583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43214]
I0320 23:44:32.155476  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26/status: (4.436889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43212]
I0320 23:44:32.156867  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (1.013691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43214]
I0320 23:44:32.157147  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.157304  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:32.157318  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:32.157412  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.157469  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.159752  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (1.74988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43210]
I0320 23:44:32.160161  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27/status: (2.493627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43214]
I0320 23:44:32.160286  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.150097ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.162031  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (1.400944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43214]
I0320 23:44:32.162457  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.162774  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:32.162794  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:32.162937  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.162990  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.164746  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (1.112442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43214]
I0320 23:44:32.165742  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.876599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.168114  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28/status: (2.418391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43214]
I0320 23:44:32.170080  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (1.453014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43214]
I0320 23:44:32.170348  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.170513  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:32.170533  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:32.170607  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.170653  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.173261  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (2.101973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.173877  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.549288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43218]
I0320 23:44:32.174086  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29/status: (3.107495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43214]
I0320 23:44:32.175554  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (1.003547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43218]
I0320 23:44:32.175816  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.176090  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30
I0320 23:44:32.176110  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30
I0320 23:44:32.176219  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.176295  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.178267  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (1.693596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.180613  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (3.825662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43220]
I0320 23:44:32.182135  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30/status: (5.580035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43218]
I0320 23:44:32.184071  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (1.399235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43220]
I0320 23:44:32.184446  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.184622  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:32.184637  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:32.184821  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.184866  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.188154  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (2.287499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.188517  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (2.563792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43220]
I0320 23:44:32.188809  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.188939  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:32.188954  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:32.189021  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.189083  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.189780  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-12.158dcf6353e34fa5: (3.783744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 23:44:32.190651  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (1.176158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.192561  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.543209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.193617  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31/status: (4.134615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43220]
I0320 23:44:32.195560  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (1.270326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.195765  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.195924  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:32.195935  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:32.196018  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.196071  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.200541  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (3.92216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43226]
I0320 23:44:32.200629  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (4.006899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 23:44:32.200659  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32/status: (3.953436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.202735  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (1.094253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.203043  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.203236  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:32.203253  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:32.203322  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.203363  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.204974  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (1.010911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43226]
I0320 23:44:32.205404  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33/status: (1.812575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.205439  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.447609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43228]
I0320 23:44:32.206852  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (988.519µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.207099  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.208143  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:32.208166  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:32.208272  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.208310  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.216337  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.684858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0320 23:44:32.217005  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (2.085723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43228]
I0320 23:44:32.217634  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.221302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 23:44:32.217715  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34/status: (2.471089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43216]
I0320 23:44:32.219384  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (1.234487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43228]
I0320 23:44:32.219684  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
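
Each failed attempt ends with generic_scheduler.go:1152 reporting node1 as a preemption candidate: the pods in this test carry priorities, and a node qualifies when evicting pods of lower priority than the pending one would free enough CPU and memory for it to fit. A simplified sketch of that feasibility test, checking only CPU and memory requests against allocatable; all function names here are hypothetical, not the scheduler's actual implementation:

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
    )

    func podPriority(p *v1.Pod) int32 {
        if p.Spec.Priority != nil {
            return *p.Spec.Priority
        }
        return 0
    }

    // requestsOf sums a pod's container requests (milliCPU, bytes of memory).
    func requestsOf(p *v1.Pod) (cpu, mem int64) {
        for _, c := range p.Spec.Containers {
            cpu += c.Resources.Requests.Cpu().MilliValue()
            mem += c.Resources.Requests.Memory().Value()
        }
        return
    }

    // isPotentialPreemptionNode reports whether pending would fit on node
    // once every lower-priority pod running there is discounted -- the idea
    // behind "Node node1 is a potential node for preemption".
    func isPotentialPreemptionNode(pending *v1.Pod, node *v1.Node, running []*v1.Pod) bool {
        allocCPU := node.Status.Allocatable.Cpu().MilliValue()
        allocMem := node.Status.Allocatable.Memory().Value()
        var usedCPU, usedMem int64
        for _, p := range running {
            if podPriority(p) < podPriority(pending) {
                continue // would-be victim; its resources become free
            }
            c, m := requestsOf(p)
            usedCPU += c
            usedMem += m
        }
        needCPU, needMem := requestsOf(pending)
        return usedCPU+needCPU <= allocCPU && usedMem+needMem <= allocMem
    }
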
I0320 23:44:32.219864  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:32.219887  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:32.220010  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.220093  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.222046  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35/status: (1.660614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 23:44:32.222575  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.019243ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0320 23:44:32.223882  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (1.253225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 23:44:32.224346  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.224576  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:32.224613  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:32.224740  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.224801  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.225009  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (1.323289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0320 23:44:32.227170  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.872762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0320 23:44:32.227186  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36/status: (2.084909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 23:44:32.229306  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (1.263726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43224]
I0320 23:44:32.229860  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (2.142828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0320 23:44:32.230115  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.230285  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:32.230308  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:32.230400  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.230451  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.232984  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.942267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0320 23:44:32.233983  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37/status: (3.332177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0320 23:44:32.235658  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (1.141092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0320 23:44:32.235965  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.236235  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:32.236257  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:32.236432  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.236519  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.238566  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38/status: (1.791826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0320 23:44:32.238992  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.625103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43236]
I0320 23:44:32.239210  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (1.739516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0320 23:44:32.240039  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (1.108723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43230]
I0320 23:44:32.240667  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.240906  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:32.240931  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:32.241009  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.241073  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.243368  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.563615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43238]
I0320 23:44:32.243542  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (2.175376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43236]
I0320 23:44:32.243935  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39/status: (2.423482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0320 23:44:32.245762  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (1.359375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0320 23:44:32.246013  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.246558  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (1.446129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43238]
I0320 23:44:32.248553  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:32.248608  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:32.248750  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.248826  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.252107  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (1.184961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43240]
I0320 23:44:32.252508  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40/status: (2.858005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0320 23:44:32.254789  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (5.101421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43242]
I0320 23:44:32.256176  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (2.812855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43234]
I0320 23:44:32.256773  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.256974  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:32.256993  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:32.257137  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.257247  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.260251  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.083628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43244]
I0320 23:44:32.260782  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (3.1556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43242]
I0320 23:44:32.260784  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41/status: (2.64696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43240]
I0320 23:44:32.262533  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (1.250082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43242]
I0320 23:44:32.262978  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.263201  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:32.263232  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:32.263337  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.263385  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.266637  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43246]
I0320 23:44:32.267242  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (3.50909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43244]
I0320 23:44:32.267534  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42/status: (3.571226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43242]
I0320 23:44:32.269532  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (1.186457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43244]
I0320 23:44:32.269841  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.270030  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:32.270065  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:32.270174  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.270222  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.271929  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (1.254699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43244]
I0320 23:44:32.272490  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.273741  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (1.178351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43244]
I0320 23:44:32.273851  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-16.158dcf63565c5e61: (2.660169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43246]
I0320 23:44:32.274205  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:32.274235  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:32.274338  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.274378  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.281868  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (1.678939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43248]
I0320 23:44:32.283146  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.726771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43250]
I0320 23:44:32.283314  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43/status: (3.038606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43246]
I0320 23:44:32.285080  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (1.210503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43246]
I0320 23:44:32.285450  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.285783  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:32.285836  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:32.285973  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.286036  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.289442  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.43433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43252]
I0320 23:44:32.291252  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44/status: (4.067486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43250]
I0320 23:44:32.291745  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (4.910044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43248]
I0320 23:44:32.294229  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (1.20932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43248]
I0320 23:44:32.294524  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.295489  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:32.295574  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:32.295724  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.295806  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.297688  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (1.530079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43248]
I0320 23:44:32.300337  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.446194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43254]
I0320 23:44:32.300811  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45/status: (2.794849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43252]
I0320 23:44:32.303446  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (2.03361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43254]
I0320 23:44:32.303738  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.303985  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:32.304002  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:32.304124  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.304175  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.305891  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (1.52208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43254]
I0320 23:44:32.306262  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.306525  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:32.306578  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:32.306778  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.306864  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.307474  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-18.158dcf6358202230: (2.68923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43248]
I0320 23:44:32.308524  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (1.595193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43254]
I0320 23:44:32.312741  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46/status: (3.776683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43260]
I0320 23:44:32.314916  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (5.191456ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43248]
I0320 23:44:32.315247  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (1.72218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43256]
I0320 23:44:32.315693  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (8.31544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43258]
I0320 23:44:32.316278  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.316442  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:32.316500  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:32.316677  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.316728  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.318534  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47/status: (1.605458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43248]
I0320 23:44:32.320020  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (2.225418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43254]
I0320 23:44:32.321588  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.7473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43264]
I0320 23:44:32.324199  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (2.928197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43254]
I0320 23:44:32.324482  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.324642  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:32.324662  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:32.324741  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (4.341952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43248]
I0320 23:44:32.324772  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.324810  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.326270  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (1.160563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43264]
I0320 23:44:32.326614  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (1.484062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43262]
I0320 23:44:32.326907  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.327091  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:32.327131  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:32.327311  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.327380  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.328045  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-19.158dcf635869a569: (2.679368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43248]
I0320 23:44:32.328800  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (1.035682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43264]
I0320 23:44:32.329493  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48/status: (1.709221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43262]
I0320 23:44:32.329872  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.339476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43248]
I0320 23:44:32.331087  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (1.233503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43262]
I0320 23:44:32.331698  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.333282  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:32.333340  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:32.333495  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.333591  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.335628  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (1.131469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43264]
I0320 23:44:32.336732  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49/status: (2.175431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43248]
I0320 23:44:32.338128  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.935546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43264]
I0320 23:44:32.338957  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (1.390149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43248]
I0320 23:44:32.339256  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.339502  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:32.339525  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:32.339762  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.339838  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.341365  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (1.320434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43264]
I0320 23:44:32.341739  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.341952  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:32.341971  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:32.342064  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.342103  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.343271  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (1.004152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43264]
I0320 23:44:32.343773  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-20.158dcf6358dead5c: (3.177399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43268]
I0320 23:44:32.344047  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.344281  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:32.344367  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:32.344313  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (1.14747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43270]
I0320 23:44:32.344645  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.344728  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.346175  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (1.279356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43268]
I0320 23:44:32.346258  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (1.307361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43264]
I0320 23:44:32.346508  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.346850  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:32.346903  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:32.347032  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.347118  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.348923  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (1.06537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43268]
I0320 23:44:32.349360  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (2.012796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43264]
I0320 23:44:32.349744  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.349904  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:32.349922  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:32.349995  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.350033  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.351279  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-24.158dcf635a8aaa5e: (5.49659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43272]
I0320 23:44:32.352441  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (2.065982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43268]
I0320 23:44:32.352867  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (2.604714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43264]
I0320 23:44:32.353217  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.353944  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:32.353967  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:32.354077  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.354112  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.354793  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (9.318261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43266]
I0320 23:44:32.356180  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-27.158dcf635b8e5c04: (2.695315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43272]
I0320 23:44:32.356655  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (2.366042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43268]
I0320 23:44:32.357153  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (2.814317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43264]
I0320 23:44:32.357661  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.357817  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:32.357828  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:32.357902  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.357936  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.360539  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-29.158dcf635c578611: (3.186881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43272]
I0320 23:44:32.363167  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-32.158dcf635ddb20c4: (2.014076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43272]
I0320 23:44:32.366368  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-35.158dcf635f4959cd: (2.59363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43272]
I0320 23:44:32.369037  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-36.158dcf635f91bd63: (2.130696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43272]
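
The POST /events requests returning 201 and the PATCH /events/<name> requests returning 200, as in the run of lines above, are two halves of the same mechanism: the event machinery creates a FailedScheduling event the first time a pod fails, then deduplicates repeats by patching the existing event rather than creating a new one. A sketch of how such a recorder is wired up with client-go's record package; the component name is illustrative:

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
        "k8s.io/client-go/tools/record"
    )

    // newRecorder builds an event recorder that emits to the API server.
    // The first event for a given pod/reason is a POST (201); repeats are
    // aggregated into a PATCH (200) that bumps the existing event's count.
    func newRecorder(client kubernetes.Interface) record.EventRecorder {
        broadcaster := record.NewBroadcaster()
        broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
            Interface: client.CoreV1().Events(""),
        })
        return broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "default-scheduler"})
    }
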
I0320 23:44:32.394845  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (36.700458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43264]
I0320 23:44:32.395370  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (37.16253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43266]
I0320 23:44:32.395788  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.395977  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:32.395995  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:32.396130  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.396174  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.398663  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (1.765589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43272]
I0320 23:44:32.399235  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (2.859486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43266]
I0320 23:44:32.399508  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.399703  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:32.399718  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:32.399814  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.399858  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.400680  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-37.158dcf635fe7fb2b: (3.638325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:32.401498  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (1.457777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43266]
I0320 23:44:32.401921  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (1.7683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43272]
I0320 23:44:32.402179  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.402328  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:32.402345  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:32.402456  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.402498  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.404477  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-41.158dcf636180c0f0: (2.898967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:32.404513  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (1.699636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43266]
I0320 23:44:32.404849  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (2.033037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43272]
I0320 23:44:32.405852  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.406273  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:32.406293  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:32.406402  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.406453  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.409089  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-42.158dcf6361de7cc7: (3.072091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43266]
I0320 23:44:32.409306  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (1.905513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:32.409705  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.409837  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:32.409849  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:32.409935  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.409973  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.410951  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (3.898785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:32.412031  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (1.422722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43278]
I0320 23:44:32.412463  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (1.868812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:32.412886  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.413171  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:32.413194  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:32.413336  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:32.413389  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:32.414952  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (1.405754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:32.415709  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (2.075214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:32.416043  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:32.418630  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-44.158dcf636338217a: (2.31907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43266]
I0320 23:44:32.420534  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.232093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:32.422716  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-47.158dcf63650c728c: (3.503746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:32.425382  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-48.158dcf6365aea129: (2.103144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:32.521487  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.107118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:32.621515  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.097389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:32.684271  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:32.686000  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:32.686015  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:32.686014  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:32.688119  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
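
The reflector.go:235 burst above is the shared informer caches firing their periodic resync together, replaying cached objects to their registered handlers. The interval is fixed when the informer factory is constructed; a minimal sketch, with a 30s value chosen for illustration rather than read from the test:

    package sketch

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
    )

    // newInformers builds a shared informer factory whose caches "force
    // resync" every defaultResync interval, producing bursts like the
    // reflector.go:235 lines above when several informers share one factory.
    func newInformers(client kubernetes.Interface) informers.SharedInformerFactory {
        return informers.NewSharedInformerFactory(client, 30*time.Second)
    }
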
I0320 23:44:32.722092  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.462627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:32.821559  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.122975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:32.921404  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.893133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.021388  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.860435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.121655  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.232026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.223744  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.0709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.321447  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.03399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.421575  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.057925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.521731  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.281062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
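
The steady GET .../pods/preemptor-pod roughly every 100ms above is the test harness polling for the preemptor to land on a node. A sketch of such a loop with wait.Poll, assuming (hypothetically) that the check inspects the PodScheduled condition:

    package sketch

    import (
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForScheduled polls the pod every 100ms -- matching the cadence of
    // the GET .../pods/preemptor-pod lines above -- until its PodScheduled
    // condition turns True or the timeout expires.
    func waitForScheduled(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
            pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == v1.PodScheduled && c.Status == v1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil
        })
    }
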
I0320 23:44:33.579874  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod
I0320 23:44:33.579920  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod
I0320 23:44:33.580115  106048 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod", node "node1"
I0320 23:44:33.580134  106048 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0320 23:44:33.580183  106048 factory.go:733] Attempting to bind preemptor-pod to node1
I0320 23:44:33.580711  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:33.580742  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:33.580856  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.580908  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.583221  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod/binding: (2.644611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.583411  106048 scheduler.go:572] pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
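The AssumePodVolumes and "Attempting to bind" lines above, followed by the POST to the /binding subresource and "bound successfully", show the scheduler's optimistic assume-then-bind ordering: the pod is assumed onto node1 in the in-memory cache first, and the API bind runs off the scheduling goroutine, which is why attempts for ppod-1 interleave before the 201 comes back. A toy sketch of that ordering, with cache and apiBind as illustrative stand-ins rather than the scheduler's real types:

package main

import (
    "fmt"
    "sync"
)

// cache is a toy stand-in for the scheduler's in-memory node cache.
type cache struct {
    mu      sync.Mutex
    assumed map[string]string // pod -> node
}

func (c *cache) assume(pod, node string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.assumed[pod] = node // resources are held immediately, before the API call
}

func (c *cache) forget(pod string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    delete(c.assumed, pod)
}

// schedule assumes the pod in the cache, then binds asynchronously so the
// scheduling loop can move on to the next pod, as the interleaved log shows.
func schedule(c *cache, pod, node string, apiBind func(pod, node string) error, done chan<- error) {
    c.assume(pod, node)
    go func() {
        err := apiBind(pod, node) // POST .../pods/<pod>/binding in the real scheduler
        if err != nil {
            c.forget(pod) // roll back the optimistic assumption on failure
        }
        done <- err
    }()
}

func main() {
    c := &cache{assumed: map[string]string{}}
    done := make(chan error, 1)
    schedule(c, "preemptor-pod", "node1", func(pod, node string) error {
        fmt.Printf("binding %s to %s\n", pod, node)
        return nil
    }, done)
    fmt.Println("bind result:", <-done)
}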
I0320 23:44:33.585283  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (2.360102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43438]
I0320 23:44:33.585729  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (2.806193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.586724  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.587008  106048 backoff_utils.go:79] Backing off 2s
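Each "no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory." line aggregates per-node predicate failures into reason counts. A self-contained sketch of that aggregation (the resource numbers are made up, and the real check covers many predicates beyond cpu and memory):

package main

import (
    "fmt"
    "sort"
    "strings"
)

type resources struct{ cpu, mem int64 }

// fitErrors checks a pod's request against each node's free capacity and
// returns per-reason counts, mirroring how the log line tallies
// "1 Insufficient cpu, 1 Insufficient memory".
func fitErrors(req resources, free map[string]resources) (feasible []string, reasons map[string]int) {
    reasons = map[string]int{}
    for node, f := range free {
        fits := true
        if req.cpu > f.cpu {
            reasons["Insufficient cpu"]++
            fits = false
        }
        if req.mem > f.mem {
            reasons["Insufficient memory"]++
            fits = false
        }
        if fits {
            feasible = append(feasible, node)
        }
    }
    return feasible, reasons
}

func main() {
    free := map[string]resources{"node1": {cpu: 100, mem: 100}} // node already packed with victims
    feasible, reasons := fitErrors(resources{cpu: 200, mem: 200}, free)

    parts := make([]string, 0, len(reasons))
    for r, n := range reasons {
        parts = append(parts, fmt.Sprintf("%d %s", n, r))
    }
    sort.Strings(parts)
    fmt.Printf("no fit: %d/%d nodes are available: %s.\n",
        len(feasible), len(free), strings.Join(parts, ", "))
}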
I0320 23:44:33.587301  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:33.587318  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:33.587429  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.587472  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.589135  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-1.158dcf634e3c2171: (6.210582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.591788  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (2.154484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.592207  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (2.942995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.592483  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.592615  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.592772  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:33.592794  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:33.592892  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.592934  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.594580  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (1.433808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.594981  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (1.890474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.595247  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.595479  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.595629  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:33.595646  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:33.595758  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.595802  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.600506  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (1.504646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.600916  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (2.266821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.601194  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.601329  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.601695  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:33.601713  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:33.601810  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.601850  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.604213  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (1.566986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.605451  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (3.187017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.605726  106048 backoff_utils.go:79] Backing off 2s
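The recurring "Backing off 2s" lines come from per-pod backoff applied after each failed scheduling attempt. A toy model assuming a doubling delay with a 1s initial value and a 10s cap (illustrative values, not read from this log):

package main

import (
    "fmt"
    "time"
)

// backoff is a toy model of the per-pod backoff behind the
// "Backing off 2s" lines: each failed attempt doubles the delay up to a cap.
type backoff struct {
    attempts     int
    initial, max time.Duration
}

func (b *backoff) next() time.Duration {
    d := b.initial << uint(b.attempts) // 1s, 2s, 4s, ...
    if d > b.max {
        d = b.max
    }
    b.attempts++
    return d
}

func main() {
    b := &backoff{initial: time.Second, max: 10 * time.Second}
    for i := 0; i < 5; i++ {
        fmt.Printf("attempt %d: backing off %v\n", i+1, b.next())
    }
}

Under these assumed parameters, a pod on its second failed attempt backs off 2s, which matches the value seen throughout this section.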
I0320 23:44:33.605825  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.606046  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:33.606083  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:33.606164  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.606210  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.606786  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (16.458427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.607816  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (1.26774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.608102  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.608966  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (2.163336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.609202  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.609310  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:33.609328  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:33.609399  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.609447  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.610876  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.267661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.611147  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.611581  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.919243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.611812  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.611843  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:33.611852  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:33.611922  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.611956  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.613513  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (1.252101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.613911  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (1.69574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.614169  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.614300  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.614402  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:33.614413  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:33.614499  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.614532  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.615974  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.272285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.616243  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.616728  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:33.616749  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:33.616846  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.616888  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.618674  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.26786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43442]
I0320 23:44:33.618831  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-3.158dcf634ede3808: (3.685428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.619083  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.691154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.619462  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.620189  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.620640  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (5.867986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.620667  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:33.620778  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:33.620907  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.620943  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.621655  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.623586  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-4.158dcf634f5c7359: (3.329618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.624084  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (2.818418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43442]
I0320 23:44:33.624463  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (4.267504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43274]
I0320 23:44:33.624749  106048 preemption_test.go:583] Check unschedulable pods still exist and were never scheduled...
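From this point the test enters its verification phase, re-fetching every low-priority pod (the GET ppod-N requests interleaved below) to assert it still exists, is still marked Unschedulable, and was never bound. A minimal sketch of that check, with getPod as a hypothetical stand-in for the client GET:

package main

import (
    "fmt"
)

// podStatus is a toy view of what the test reads back for each ppod-N.
type podStatus struct {
    nodeName      string // empty while the pod is unscheduled
    unschedulable bool   // PodScheduled==False, Reason=Unschedulable
}

// checkNeverScheduled models the verification step logged above: every
// low-priority pod must still exist, still be marked Unschedulable, and
// never have been bound to a node.
func checkNeverScheduled(names []string, getPod func(string) (podStatus, bool)) error {
    for _, name := range names {
        st, ok := getPod(name)
        if !ok {
            return fmt.Errorf("pod %s no longer exists", name)
        }
        if st.nodeName != "" || !st.unschedulable {
            return fmt.Errorf("pod %s was scheduled to %q", name, st.nodeName)
        }
    }
    return nil
}

func main() {
    pods := map[string]podStatus{
        "ppod-0": {unschedulable: true},
        "ppod-1": {unschedulable: true},
    }
    err := checkNeverScheduled([]string{"ppod-0", "ppod-1"}, func(n string) (podStatus, bool) {
        st, ok := pods[n]
        return st, ok
    })
    fmt.Println("check result:", err)
}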
I0320 23:44:33.624916  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.625897  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:33.625914  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:33.626009  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.626043  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.628253  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-0.158dcf634dc004ae: (2.82634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.629240  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (4.119093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43444]
I0320 23:44:33.629659  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (7.847091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.630045  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (3.155315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.630164  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.630305  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.630468  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (4.200864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43446]
I0320 23:44:33.631104  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.631684  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:33.631754  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:33.631872  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.631935  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.632751  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-5.158dcf6350598f58: (3.601515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.634205  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (1.976561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.634613  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (2.364701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.634816  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.635016  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.635077  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:33.635087  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:33.635164  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.635201  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.636478  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (6.69449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43444]
I0320 23:44:33.636808  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-6.158dcf6350d19080: (3.019911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.638760  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (3.331045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.639218  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (3.865724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.639553  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (2.711999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43444]
I0320 23:44:33.641253  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-7.158dcf6351ef6b65: (3.28408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.641852  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.641540  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.642581  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:33.642596  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:33.642666  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.642707  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.644549  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (2.652616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.644949  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (1.772202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43450]
I0320 23:44:33.645675  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.646381  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (3.542089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.646685  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.646907  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:33.646950  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:33.647080  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.647153  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.647194  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-2.158dcf634e9c0e8f: (4.012885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.648903  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (1.179225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.649224  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (1.738483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.649313  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (2.741465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43276]
I0320 23:44:33.649528  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.649537  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.650154  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:33.650178  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:33.650253  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.650295  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.652091  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-9.158dcf6352f4172b: (4.304238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43450]
I0320 23:44:33.652927  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (1.377889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.653314  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (2.044334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43452]
I0320 23:44:33.653565  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.653686  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.654281  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:33.654309  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:33.654388  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.654439  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.655828  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (1.158485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43452]
I0320 23:44:33.656147  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.656443  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (6.32871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.656803  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (2.200648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.657004  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.657139  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:33.657151  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:33.657222  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.657254  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.658172  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (1.372671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.658320  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-10.158dcf635344152d: (4.514213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43450]
I0320 23:44:33.659086  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.307955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.659306  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.659466  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:33.659485  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:33.659556  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.659598  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.659703  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.907885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43452]
I0320 23:44:33.659893  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.363551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.660157  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.661894  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (1.778595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.662378  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (2.270436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43454]
I0320 23:44:33.662860  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (2.435616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.663292  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.663457  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.663899  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:33.663924  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:33.664001  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.664040  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.664513  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.143954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.666173  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-11.158dcf635396def9: (7.298082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43450]
I0320 23:44:33.666297  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (1.826183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.666653  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (2.286687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43452]
I0320 23:44:33.666699  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.666845  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:33.666856  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:33.666938  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.666973  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.667023  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.670699  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-13.158dcf63545243ee: (3.000554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.673795  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-14.158dcf6354e4d6e5: (2.281874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.678765  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-15.158dcf6355f5068c: (4.25796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.682692  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-17.158dcf63578073e5: (3.339824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.684519  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:33.686146  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:33.686160  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:33.686179  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:33.686650  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-21.158dcf63593e1dd8: (2.838323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.688266  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
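The "forcing resync" lines are the shared informers re-delivering their cached objects on a fixed period, so level-triggered consumers reconcile even when no fresh watch event arrives. A toy ticker-based sketch of that behavior (listCached and handle are illustrative callbacks, not client-go's real interfaces):

package main

import (
    "fmt"
    "time"
)

// runResync periodically re-delivers every cached object to the handler,
// modeling the reflector's forced resync.
func runResync(period time.Duration, stop <-chan struct{}, listCached func() []string, handle func(string)) {
    t := time.NewTicker(period)
    defer t.Stop()
    for {
        select {
        case <-t.C:
            for _, obj := range listCached() {
                handle(obj) // re-notify, even though nothing changed
            }
        case <-stop:
            return
        }
    }
}

func main() {
    stop := make(chan struct{})
    go runResync(100*time.Millisecond, stop,
        func() []string { return []string{"ppod-0", "preemptor-pod"} },
        func(obj string) { fmt.Println("resync:", obj) })
    time.Sleep(250 * time.Millisecond)
    close(stop)
}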
I0320 23:44:33.689711  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (22.393832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.689720  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (22.322327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43450]
I0320 23:44:33.689997  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.690037  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.690192  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30
I0320 23:44:33.690207  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30
I0320 23:44:33.690294  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.690331  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.691926  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (1.419461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43450]
I0320 23:44:33.692016  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-8.158dcf63525fb03a: (4.770419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.692153  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (1.560455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43440]
I0320 23:44:33.692223  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.692343  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:33.692353  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:33.692434  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.692468  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.692521  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.693147  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (26.870807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.694357  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (1.733763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.694564  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (1.610522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43458]
I0320 23:44:33.694573  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.694884  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:33.694901  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:33.694904  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.695001  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.695038  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.696023  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-22.158dcf6359d4dcc0: (3.396074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43450]
I0320 23:44:33.697763  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (1.948874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43458]
I0320 23:44:33.698213  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (2.73152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.699157  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.699294  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.702480  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-23.158dcf635a3d3de2: (5.515246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43450]
I0320 23:44:33.705500  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-25.158dcf635ade6d15: (2.442994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43458]
I0320 23:44:33.708459  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-26.158dcf635b28f6a3: (2.445724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43458]
I0320 23:44:33.709630  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (1.503957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.710197  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:33.711812  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:33.711996  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.712092  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.711666  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (1.673108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.713019  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-28.158dcf635be297e2: (3.744114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43458]
I0320 23:44:33.714251  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (1.350009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.714548  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (1.992649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.715557  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.715874  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (2.815741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43462]
I0320 23:44:33.715983  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:33.716005  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:33.716126  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.716173  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.716453  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.717845  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (1.045992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43458]
I0320 23:44:33.718217  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.718237  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (1.764264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.719271  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.719484  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:33.719507  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:33.719592  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.719633  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.719671  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (2.948503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43462]
I0320 23:44:33.720491  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-30.158dcf635cad962c: (4.948305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.721981  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (2.107191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43458]
I0320 23:44:33.722277  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.722642  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (2.832865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.722877  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.723019  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:33.723042  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:33.723145  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.723194  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.725099  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-12.158dcf6353e34fa5: (2.763085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.726457  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (2.580848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43458]
I0320 23:44:33.726863  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (3.313981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.727110  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.727270  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.727538  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (6.703236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43462]
I0320 23:44:33.728977  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-31.158dcf635d70bc05: (3.328477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.729239  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (1.141715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43458]
I0320 23:44:33.729917  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:33.729939  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:33.730044  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.730179  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.730592  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (999.951µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.733187  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (2.192885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43464]
I0320 23:44:33.733804  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (2.475984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0320 23:44:33.734164  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-33.158dcf635e4aa6b4: (4.476844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43448]
I0320 23:44:33.734909  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (3.886881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.735154  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.735732  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:33.735750  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:33.735841  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.735876  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.737156  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.737622  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-34.158dcf635e9622b6: (2.793712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0320 23:44:33.738026  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (3.362491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43464]
I0320 23:44:33.738744  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (2.051242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I0320 23:44:33.738921  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (2.648814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.739004  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.739129  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.739358  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:33.739375  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:33.739467  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.739503  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.739991  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (1.633333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0320 23:44:33.740946  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (1.289424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.741236  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.741472  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (1.140461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0320 23:44:33.741650  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (1.611757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I0320 23:44:33.741887  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.741988  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:33.742005  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:33.742169  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.742241  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.743018  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-38.158dcf6360447eba: (4.760906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43464]
I0320 23:44:33.744013  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (1.068641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.744274  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.744619  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (2.129531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0320 23:44:33.744879  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.746694  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (1.576449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0320 23:44:33.747274  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-39.158dcf636089b2fc: (2.770484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43456]
I0320 23:44:33.747588  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:33.747608  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:33.747975  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.748113  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.748927  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.004207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0320 23:44:33.750238  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (1.78129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43472]
I0320 23:44:33.751084  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.751619  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-40.158dcf6361004b94: (3.405019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43464]
I0320 23:44:33.752112  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (1.713557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43472]
I0320 23:44:33.752658  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (2.664011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0320 23:44:33.752955  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.753139  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:33.753170  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:33.753251  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.753295  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.755683  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (3.001786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43472]
I0320 23:44:33.756560  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (3.002515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43476]
I0320 23:44:33.756991  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-16.158dcf63565c5e61: (4.877407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43464]
I0320 23:44:33.758387  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (1.356338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43472]
I0320 23:44:33.758978  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.760397  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (6.873785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0320 23:44:33.766047  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (1.509676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43472]
I0320 23:44:33.767752  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-43.158dcf6362863944: (10.198497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43464]
I0320 23:44:33.768511  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (1.87247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43472]
I0320 23:44:33.770229  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (1.222882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43472]
I0320 23:44:33.770792  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-45.158dcf6363cd25c7: (2.469745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0320 23:44:33.771488  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.771783  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:33.771997  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:33.772124  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (1.160104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43472]
I0320 23:44:33.772161  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.772258  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.773945  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-18.158dcf6358202230: (2.558992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0320 23:44:33.774682  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (1.709836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43476]
I0320 23:44:33.774893  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.775244  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (2.575733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43472]
I0320 23:44:33.775312  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (951.267µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43478]
I0320 23:44:33.775557  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.776778  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (1.05399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43476]
I0320 23:44:33.777158  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-46.158dcf636475ec19: (2.655415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0320 23:44:33.777838  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:33.777863  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:33.777953  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.778000  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.778698  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (1.362882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43476]
I0320 23:44:33.780753  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (1.978641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43484]
I0320 23:44:33.780841  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (2.362721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43482]
I0320 23:44:33.780981  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.781098  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.781271  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:33.781294  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:33.781372  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.781410  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.783155  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-19.158dcf635869a569: (5.462184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43480]
I0320 23:44:33.786215  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-49.158dcf63660d77bd: (2.516596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43480]
I0320 23:44:33.786306  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (4.72506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43484]
I0320 23:44:33.786605  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.786877  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:33.786902  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:33.787008  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.787067  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.788279  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (8.463264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43476]
I0320 23:44:33.788298  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (6.353007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43474]
I0320 23:44:33.792500  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-24.158dcf635a8aaa5e: (4.543361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43486]
I0320 23:44:33.796010  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-27.158dcf635b8e5c04: (2.530049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43486]
I0320 23:44:33.797575  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (9.969247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43480]
I0320 23:44:33.798196  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (10.89084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43484]
I0320 23:44:33.798559  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.798793  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.800071  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (11.109541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43476]
I0320 23:44:33.800621  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.801825  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (1.321431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43484]
I0320 23:44:33.802261  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:33.802275  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:33.802371  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.802410  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.804729  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (1.564435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43488]
I0320 23:44:33.805119  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (2.792955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43484]
I0320 23:44:33.805223  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.805537  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (2.883782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43486]
I0320 23:44:33.805729  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.807136  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-29.158dcf635c578611: (3.934263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43490]
I0320 23:44:33.807206  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (1.735645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43484]
I0320 23:44:33.807534  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:33.807549  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:33.808891  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (1.276545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43486]
I0320 23:44:33.809069  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.809128  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.810671  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (1.300052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43488]
I0320 23:44:33.810956  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (1.67167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43486]
I0320 23:44:33.811170  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.812119  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (1.722445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43494]
I0320 23:44:33.813159  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-32.158dcf635ddb20c4: (2.623945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43492]
I0320 23:44:33.813455  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.814335  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (1.221979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43494]
I0320 23:44:33.814586  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:33.815229  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:33.815329  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.816004  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (1.08567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43486]
I0320 23:44:33.816769  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.817551  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (1.092018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43486]
I0320 23:44:33.819980  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (3.042215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43488]
I0320 23:44:33.820221  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.820286  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (2.674249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43496]
I0320 23:44:33.820396  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:33.820411  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:33.820484  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (2.633108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43486]
I0320 23:44:33.820598  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.820650  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.820742  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.821645  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-20.158dcf6358dead5c: (3.408889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43498]
I0320 23:44:33.821903  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (1.066385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43496]
I0320 23:44:33.822303  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (1.396662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43488]
I0320 23:44:33.822547  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.823646  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (2.246405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43500]
I0320 23:44:33.823873  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.824310  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (1.502456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43498]
I0320 23:44:33.824880  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:33.824898  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:33.824985  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.825022  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.825910  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (1.070296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43498]
I0320 23:44:33.826682  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (1.045744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.827380  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.827506  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (1.180142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43498]
I0320 23:44:33.828960  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (1.035045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43498]
I0320 23:44:33.829237  106048 preemption_test.go:598] Cleaning up all pods...
I0320 23:44:33.844875  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (5.630101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43500]
I0320 23:44:33.845339  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.845569  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:33.845598  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:33.845719  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.845794  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.847347  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (17.956991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43498]
I0320 23:44:33.883763  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (37.093736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.884147  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.884504  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (37.814514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43500]
I0320 23:44:33.885001  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.885253  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:33.885269  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:33.885519  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.885622  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.887733  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (1.720451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43500]
I0320 23:44:33.888740  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.888880  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:33.888896  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:33.888974  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.889108  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.889223  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (2.726718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.889972  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (42.277042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43498]
I0320 23:44:33.892228  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (2.40961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.892825  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.893940  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.895822  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (5.837045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43500]
I0320 23:44:33.897264  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.897392  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:33.897402  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:33.897481  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.900354  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.902723  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (1.642531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.903165  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (2.525281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.903460  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.903591  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.903751  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:33.903768  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:33.903848  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.903886  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.905219  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (13.707213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43498]
I0320 23:44:33.906408  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-35.158dcf635f4959cd: (83.888149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43496]
I0320 23:44:33.909546  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-36.158dcf635f91bd63: (2.565204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43496]
I0320 23:44:33.912125  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-37.158dcf635fe7fb2b: (2.098979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43496]
I0320 23:44:33.915595  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-41.158dcf636180c0f0: (2.946767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43496]
I0320 23:44:33.916563  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (12.272415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.916915  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (12.332543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.917346  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.917510  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:33.917528  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:33.917604  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:33.917649  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:33.918027  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.921906  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (1.518724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.922283  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (2.135292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.922928  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:33.923109  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:33.923521  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-42.158dcf6361de7cc7: (3.748542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43496]
I0320 23:44:33.924825  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (18.946813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43498]
I0320 23:44:33.925000  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:33.925034  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:33.927764  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-44.158dcf636338217a: (2.241006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.928929  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:33.928961  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:33.931122  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (5.138813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43498]
I0320 23:44:33.931746  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-47.158dcf63650c728c: (3.417728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.934848  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-48.158dcf6365aea129: (2.498044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.935655  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:33.935689  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:33.938226  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (6.726068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43498]
I0320 23:44:33.938519  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.327462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.941781  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:33.942235  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:33.943446  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (4.392525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.944983  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (6.116365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.946530  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:33.946566  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:33.948251  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.442371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.948408  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (4.685286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.951619  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:33.951651  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:33.953668  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (4.826786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.954861  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (6.127066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.957091  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.73286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.957492  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:33.957593  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:33.960317  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (6.179712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.961246  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (3.596924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.964009  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.308618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.964757  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:33.964793  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:33.966398  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.348706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.967003  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (6.383382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.969750  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:33.969787  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:33.972025  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.00592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.972708  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (5.401469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.978719  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:33.978753  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:33.980261  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.202812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.982685  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (9.58876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.985286  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:33.985329  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:33.987135  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (4.086801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.988486  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.78496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.990309  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:33.990350  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:33.992025  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.412033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:33.993611  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (5.967554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:33.997115  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:33.997155  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:33.999120  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.731425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.000517  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (6.582934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.004785  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:34.004823  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:34.005402  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (4.382467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.006844  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.437486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.009604  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:34.009649  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:34.010934  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (5.159591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.012152  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.143634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.014949  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (3.446604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.015351  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:34.015385  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:34.017405  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.439165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.019392  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:34.019433  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:34.023166  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (7.914133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.023749  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (3.484061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.026350  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:34.026391  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:34.027630  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (4.203932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.028130  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.233006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.030675  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:34.030716  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:34.032460  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.451733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.032812  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (4.868307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.036346  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:34.036376  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:34.038441  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.302474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.038840  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (5.577299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.041945  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:34.041990  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:34.043892  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.484107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.045009  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (5.847301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.049449  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:34.049533  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:34.051327  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.471577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.052276  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (6.884981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.064694  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:34.064728  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:34.066412  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.453472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.066945  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (14.368795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.070218  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:34.070251  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:34.071753  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.262195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.073377  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (6.049955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.076170  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:34.076213  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:34.082191  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (5.768839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.083987  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (10.345122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.086867  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:34.086902  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:34.088162  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (3.902572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.088654  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.488001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.093030  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:34.093079  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:34.094650  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.366546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.095373  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (6.561649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.098291  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30
I0320 23:44:34.098367  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30
I0320 23:44:34.099986  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.294861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.100780  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (5.043272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.104995  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:34.105025  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:34.106329  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (3.782918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.106915  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.671165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.111880  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (5.069837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.111896  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:34.111955  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:34.113624  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.351896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.115712  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:34.115753  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:34.117150  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (4.97411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.117442  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.440668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.123176  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:34.123271  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:34.124607  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (5.193359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.125798  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.14555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.128619  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:34.128655  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:34.131336  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.01917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.132203  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (6.313376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.135008  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:34.135093  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:34.137897  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (5.348724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.158496  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (23.084007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.174957  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:34.175019  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:34.176578  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (38.35064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.177187  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.843749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.180261  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:34.180389  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:34.182892  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.557474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.191025  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (13.851438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.196098  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:34.196136  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:34.199558  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (8.211818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.207312  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:34.207383  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:34.209950  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.334202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.212731  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (11.968158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.213376  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.59777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.216546  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:34.216587  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:34.218392  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (4.510974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.222452  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:34.222488  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:34.224740  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (5.79576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.226261  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (9.447262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.237255  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (10.415874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.244902  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:34.244941  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:34.249946  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (24.932084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.254156  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (8.381966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.256666  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:34.256712  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:34.260314  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (9.830484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.263488  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (6.101311ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.264460  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:34.264506  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:34.265973  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (5.156109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.268209  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (3.262048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.270001  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:34.270031  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:34.271747  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.510323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.274616  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (8.092294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.287790  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:34.287831  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:34.296289  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (8.17222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.296914  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (21.80148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.300496  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:34.300568  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:34.304704  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (3.810816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.308102  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (10.761244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.311814  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:34.311859  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:34.313760  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.631254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.314956  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (6.415573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
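
The run of "Skip schedule deleting pod" lines above comes from the scheduler bailing out when a pod popped from the queue has already been marked for deletion. A minimal sketch of that guard, assuming the v1 core types (illustrative only, not the scheduler's exact source):

    package sketch

    import v1 "k8s.io/api/core/v1"

    // skipPodSchedule reports whether a pod should be skipped rather than
    // scheduled; a non-nil DeletionTimestamp means the pod is being torn
    // down, which is what produces the "Skip schedule deleting pod" lines.
    func skipPodSchedule(pod *v1.Pod) bool {
        return pod.DeletionTimestamp != nil
    }
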
I0320 23:44:34.320280  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-0: (4.391512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.323743  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1: (3.13612ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.329353  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (5.166944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.331638  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (784.665µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.334741  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (1.620478ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.337114  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (919.406µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.340163  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (1.6203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.342875  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (1.244948ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.345182  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (848.344µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.348024  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (798.119µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.351166  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.678419ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.354773  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (1.023571ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.357766  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.33994ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.360330  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.066573ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.363132  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (1.187678ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.366557  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (1.51029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.368768  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (790.336µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.370996  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (723.586µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.374449  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (1.983671ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.379552  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (1.043328ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.381951  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (826.975µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.385085  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (1.509035ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.387893  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (861.651µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.391361  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (1.996536ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.393898  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (915.525µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.396492  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (1.119031ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.401635  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (969.831µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.405235  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (2.144596ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.409600  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (2.603669ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.411917  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (884.079µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.417106  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (3.607219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.420466  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (1.547622ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.423330  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (1.355344ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.426913  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (1.38393ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.429490  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (1.067149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.433307  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (2.283736ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.436605  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (871.769µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.439028  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (892.364µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.442026  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (928.503µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.444729  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (1.196756ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.457900  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (3.420639ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.460630  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (1.03018ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.463472  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (1.203796ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.466326  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (1.291365ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.472935  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (1.515014ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.476575  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (2.106027ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.478995  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (906.585µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.481386  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (797.345µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.483904  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (949.777µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.486332  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (903.675µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.488777  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (912.865µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.491202  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (920.791µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.493660  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (874.913µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.496182  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-0: (894.932µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.498660  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1: (968.248µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.500869  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (742.871µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
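
After the DELETE calls, the harness issues one GET per pod and treats a 404 as proof the object is gone, which is the block of NotFound responses above. A hedged sketch of such a cleanup poll, using the context-free client-go signatures of this era; the helper name and timeouts are assumptions:

    package sketch

    import (
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodsGone polls each pod with GET until the apiserver answers
    // NotFound (the 404 lines above), or the timeout expires.
    func waitForPodsGone(cs kubernetes.Interface, ns string, names []string) error {
        for _, name := range names {
            err := wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
                _, getErr := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
                if getErr == nil {
                    return false, nil // still present; keep polling
                }
                if apierrors.IsNotFound(getErr) {
                    return true, nil // 404: deletion finished
                }
                return false, getErr // unexpected error aborts the poll
            })
            if err != nil {
                return fmt.Errorf("pod %s/%s was not cleaned up: %v", ns, name, err)
            }
        }
        return nil
    }
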
I0320 23:44:34.506597  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (5.133049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.506903  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0
I0320 23:44:34.506916  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0
I0320 23:44:34.507031  106048 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0", node "node1"
I0320 23:44:34.507043  106048 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0320 23:44:34.507118  106048 factory.go:733] Attempting to bind rpod-0 to node1
I0320 23:44:34.509577  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.712901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.509876  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-0/binding: (2.510293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.510139  106048 scheduler.go:572] pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 23:44:34.510885  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1
I0320 23:44:34.510900  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1
I0320 23:44:34.510999  106048 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1", node "node1"
I0320 23:44:34.511013  106048 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0320 23:44:34.511694  106048 factory.go:733] Attempting to bind rpod-1 to node1
I0320 23:44:34.512002  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.607824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.514449  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1/binding: (2.260338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43540]
I0320 23:44:34.514625  106048 scheduler.go:572] pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 23:44:34.516265  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.406766ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.615306  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-0: (2.036918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.684749  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:34.686312  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:34.686385  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:34.686473  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:34.688461  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:34.718171  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1: (1.95421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
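
The 200 GETs on rpod-0 and rpod-1 look like the harness polling until both re-created pods are bound (matching the scheduler's "is bound successfully on node node1" lines). A sketch of such a wait, assuming binding is observed via Spec.NodeName being set:

    package sketch

    import (
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodScheduled polls until the pod carries a node assignment,
    // which the binding POSTed by the scheduler sets on the object.
    func waitForPodScheduled(cs kubernetes.Interface, ns, name string) error {
        return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return pod.Spec.NodeName != "", nil
        })
    }
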
I0320 23:44:34.718567  106048 preemption_test.go:561] Creating the preemptor pod...
I0320 23:44:34.721087  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.157917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.722088  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod
I0320 23:44:34.722111  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod
I0320 23:44:34.722216  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.722265  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
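
preemption_test.go then creates a preemptor whose requests cannot fit alongside rpod-0/rpod-1, so the first attempt logs "no fit ... Insufficient cpu, Insufficient memory" and generic_scheduler marks node1 as a preemption candidate. A hypothetical pod spec in that shape; the priority value, image, and request sizes are invented for illustration:

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // newPreemptorPod builds a high-priority pod whose CPU/memory requests
    // exceed what is free on the node, forcing the preemption path.
    func newPreemptorPod(ns string) *v1.Pod {
        prio := int32(100) // higher than the ppod-* victims; value is illustrative
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod", Namespace: ns},
            Spec: v1.PodSpec{
                Priority: &prio,
                Containers: []v1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.1",
                    Resources: v1.ResourceRequirements{
                        Requests: v1.ResourceList{
                            v1.ResourceCPU:    resource.MustParse("400m"),
                            v1.ResourceMemory: resource.MustParse("400Mi"),
                        },
                    },
                }},
            },
        }
    }

    func createPreemptor(cs kubernetes.Interface, ns string) (*v1.Pod, error) {
        return cs.CoreV1().Pods(ns).Create(newPreemptorPod(ns))
    }
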
I0320 23:44:34.723126  106048 preemption_test.go:567] Creating additional pods...
I0320 23:44:34.726731  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.954024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.727788  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.971061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43548]
I0320 23:44:34.728206  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod/status: (4.451499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43502]
I0320 23:44:34.732209  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.294483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.732416  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.732919  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.61912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43548]
I0320 23:44:34.734158  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (9.624843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43546]
I0320 23:44:34.736290  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.038498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43548]
I0320 23:44:34.736813  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod/status: (4.061076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43506]
I0320 23:44:34.740342  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.980341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43546]
I0320 23:44:34.742385  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1: (4.928299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43544]
I0320 23:44:34.742396  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.696111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43546]
I0320 23:44:34.742634  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:34.742654  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:34.742773  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.742815  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.746458  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0/status: (3.230637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43544]
I0320 23:44:34.746499  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (3.134993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43550]
I0320 23:44:34.747502  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.252897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43546]
I0320 23:44:34.748484  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.580253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43544]
I0320 23:44:34.748894  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (1.871588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43550]
I0320 23:44:34.749289  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.749488  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:34.749503  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:34.749591  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.749627  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.749900  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (6.511224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43552]
I0320 23:44:34.751511  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (1.357495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43544]
I0320 23:44:34.752238  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.769984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43552]
I0320 23:44:34.752613  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.649368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43546]
I0320 23:44:34.756207  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1/status: (5.830063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43550]
I0320 23:44:34.756274  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.17933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43544]
I0320 23:44:34.759215  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.515154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43544]
I0320 23:44:34.759661  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (2.870319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43554]
I0320 23:44:34.759897  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.760074  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:34.760091  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:34.760160  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.760193  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.763018  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.326824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43558]
I0320 23:44:34.764581  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2/status: (3.537678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43554]
I0320 23:44:34.765090  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (4.003351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43556]
I0320 23:44:34.767945  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (1.83924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43556]
I0320 23:44:34.768241  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.769285  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:34.769304  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:34.769407  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.769433  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (9.737287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43544]
I0320 23:44:34.769462  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.772198  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (2.166886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43558]
I0320 23:44:34.772540  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3/status: (2.159098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43556]
I0320 23:44:34.772917  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.5864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43562]
I0320 23:44:34.773431  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.281934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43560]
I0320 23:44:34.777826  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (4.3322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43562]
I0320 23:44:34.778170  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.778254  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (4.135403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43560]
I0320 23:44:34.778827  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:34.778848  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:34.778955  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.779012  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.781236  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.48845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43566]
I0320 23:44:34.782804  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4/status: (3.563863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43558]
I0320 23:44:34.783217  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (4.551494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43562]
I0320 23:44:34.786570  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (7.004122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43564]
I0320 23:44:34.786810  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.341964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43566]
I0320 23:44:34.787758  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (3.638205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43558]
I0320 23:44:34.788036  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.789012  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:34.789026  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:34.789341  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.789384  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.789446  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.216637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43566]
I0320 23:44:34.792319  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.464994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43566]
I0320 23:44:34.792597  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (2.915359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43564]
I0320 23:44:34.793040  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-0.158dcf63f5a79b10: (2.812426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43568]
I0320 23:44:34.795256  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (4.699467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43558]
I0320 23:44:34.795498  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.795811  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:34.795853  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:34.795934  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.795990  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.796167  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.532286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43566]
I0320 23:44:34.798026  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (1.402053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43564]
I0320 23:44:34.798992  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.801781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43566]
I0320 23:44:34.799587  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.473915ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43572]
I0320 23:44:34.799680  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5/status: (2.944431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43568]
I0320 23:44:34.801116  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (1.06744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43566]
I0320 23:44:34.801380  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.801828  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.456874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43564]
I0320 23:44:34.802164  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:34.802188  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:34.802290  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.802330  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.804567  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.301778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43576]
I0320 23:44:34.805223  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.982722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43566]
I0320 23:44:34.805285  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6/status: (2.239578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43570]
I0320 23:44:34.805591  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (2.352584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43574]
I0320 23:44:34.806873  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (1.024035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43570]
I0320 23:44:34.807095  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.807287  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:34.807333  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:34.807594  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.807629  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.808135  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.179076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43576]
I0320 23:44:34.809720  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7/status: (1.894205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43570]
I0320 23:44:34.810900  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.058212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I0320 23:44:34.810933  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.809455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43580]
I0320 23:44:34.813180  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.685697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I0320 23:44:34.815226  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.702803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43582]
I0320 23:44:34.816273  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (5.065219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43570]
I0320 23:44:34.816805  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (2.708412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43576]
I0320 23:44:34.818321  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.818511  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:34.818528  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:34.818631  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.818685  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.819642  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.963509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43578]
I0320 23:44:34.821104  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.831354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43584]
I0320 23:44:34.821556  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (2.565883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43580]
I0320 23:44:34.821831  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8/status: (2.107202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43576]
I0320 23:44:34.823392  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (1.137703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43580]
I0320 23:44:34.823679  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.823911  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:34.823964  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:34.824183  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.956635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43578]
I0320 23:44:34.824267  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.824573  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.826248  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.582751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43580]
I0320 23:44:34.826447  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.763889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43584]
I0320 23:44:34.826730  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.241693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43586]
I0320 23:44:34.828242  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9/status: (1.669632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43580]
I0320 23:44:34.829581  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.920287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43584]
I0320 23:44:34.831416  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.1906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43580]
I0320 23:44:34.831698  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.831936  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:34.831983  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:34.832160  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.832198  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.246414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43584]
I0320 23:44:34.832248  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.834171  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.394209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43586]
I0320 23:44:34.834926  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.712388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43592]
I0320 23:44:34.836349  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.478515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43590]
I0320 23:44:34.838573  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10/status: (5.764233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43580]
I0320 23:44:34.840989  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.801375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43592]
I0320 23:44:34.841251  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.040863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43580]
I0320 23:44:34.841716  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.841879  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:34.841893  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:34.841973  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.842037  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.845349  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (1.427955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I0320 23:44:34.846566  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (4.701854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43592]
I0320 23:44:34.847655  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.985355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43596]
I0320 23:44:34.848268  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11/status: (5.546668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43586]
I0320 23:44:34.850245  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (1.353326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43596]
I0320 23:44:34.850846  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.851120  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:34.851163  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:34.851201  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.638809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43592]
I0320 23:44:34.851327  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.851395  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.853833  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (1.519417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43596]
I0320 23:44:34.854954  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (3.044426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43594]
I0320 23:44:34.855350  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.855822  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-4.158dcf63f7cff1d7: (3.147086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43598]
I0320 23:44:34.855900  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:34.856019  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:34.856136  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.856188  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.857242  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.790556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43596]
I0320 23:44:34.859085  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.215903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43600]
I0320 23:44:34.859698  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (3.288008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43592]
I0320 23:44:34.860494  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12/status: (3.934846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43598]
I0320 23:44:34.860619  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.790372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43596]
I0320 23:44:34.864265  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (3.237211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43592]
I0320 23:44:34.864550  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.864833  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:34.864855  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:34.864864  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.673723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43600]
I0320 23:44:34.864972  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.865020  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.867358  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.320926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43604]
I0320 23:44:34.868154  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13/status: (2.850032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43600]
I0320 23:44:34.868609  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.334313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43592]
I0320 23:44:34.869312  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (3.430294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I0320 23:44:34.869780  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (1.266593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43600]
I0320 23:44:34.870028  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.870228  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:34.870317  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:34.870416  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.871120  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.157374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43592]
I0320 23:44:34.871386  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.875682  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.134115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43604]
I0320 23:44:34.876266  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (2.65553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I0320 23:44:34.876347  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14/status: (2.206954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43592]
I0320 23:44:34.877934  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.528872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43608]
I0320 23:44:34.877945  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (1.022416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43602]
I0320 23:44:34.878267  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.878435  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:34.878453  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:34.878584  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.878627  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.880378  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.068473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43608]
I0320 23:44:34.881224  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15/status: (2.292179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43606]
I0320 23:44:34.881373  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (1.879872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.882385  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.575438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43612]
I0320 23:44:34.883432  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (1.446972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43606]
I0320 23:44:34.883525  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.24735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43608]
I0320 23:44:34.883685  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.883939  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:34.883990  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:34.884137  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.884258  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.885573  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (1.183168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.887131  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16/status: (2.080375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43614]
I0320 23:44:34.887363  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.653537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43616]
I0320 23:44:34.887516  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.278654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43612]
I0320 23:44:34.889125  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (1.548353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43614]
I0320 23:44:34.889335  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.889486  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.619796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43616]
I0320 23:44:34.889505  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:34.889731  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:34.889838  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.889886  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.892343  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (1.858436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.893471  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17/status: (3.00708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43614]
I0320 23:44:34.896284  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.131438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43620]
I0320 23:44:34.898527  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (3.443416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.898867  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.899119  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:34.899147  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:34.899238  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.899282  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.901030  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (4.347513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43620]
I0320 23:44:34.902503  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18/status: (2.995942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.905720  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (4.040012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43620]
I0320 23:44:34.907264  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (1.192211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.907480  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.907649  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:34.907670  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:34.907785  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.907829  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.907978  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.586411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43620]
I0320 23:44:34.909178  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (927.523µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43620]
I0320 23:44:34.910224  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19/status: (2.115824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.911348  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.617049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43622]
I0320 23:44:34.911700  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (998.13µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.911893  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.912091  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:34.912108  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:34.912183  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.912227  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.913893  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (1.414815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43620]
I0320 23:44:34.914357  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.589953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43622]
I0320 23:44:34.915779  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20/status: (3.29509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.916180  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (25.529346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43616]
I0320 23:44:34.916842  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.742245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43620]
I0320 23:44:34.917585  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (17.97097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43618]
I0320 23:44:34.918236  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.623512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43616]
I0320 23:44:34.918631  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (2.3134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.918938  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.919133  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:34.919150  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:34.919234  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.919274  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.920876  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.358304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.920954  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.284536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43622]
I0320 23:44:34.922652  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.922948  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:34.922963  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (4.184958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43620]
I0320 23:44:34.922965  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:34.923082  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.923132  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.924833  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (1.162648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43622]
I0320 23:44:34.925559  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21/status: (1.951864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.926949  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.699334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43624]
I0320 23:44:34.927607  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (1.357819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.927928  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.928087  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:34.928149  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:34.928269  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.928322  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.930499  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (1.851886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43622]
I0320 23:44:34.930568  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22/status: (2.008928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.933006  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (1.137446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.933268  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.933449  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:34.933508  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:34.933561  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-7.158dcf63f984a956: (2.536884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43622]
I0320 23:44:34.933735  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.933794  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.935233  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.087386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43622]
I0320 23:44:34.935968  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.439294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43628]
I0320 23:44:34.937875  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.487875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43628]
I0320 23:44:34.939925  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.513886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43626]
I0320 23:44:34.940141  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23/status: (2.681468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43610]
I0320 23:44:34.942295  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.403917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43626]
I0320 23:44:34.942565  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.942823  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:34.942908  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:34.943611  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.945093  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.946619  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (1.311613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43626]
I0320 23:44:34.947680  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24/status: (2.368763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43622]
I0320 23:44:34.948695  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.95815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I0320 23:44:34.949585  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (1.394724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43622]
I0320 23:44:34.951243  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.951408  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:34.951435  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:34.951555  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.951600  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.954332  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (2.223128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43626]
I0320 23:44:34.954435  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25/status: (2.54512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I0320 23:44:34.955119  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.560657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43632]
I0320 23:44:34.956208  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (1.145319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43630]
I0320 23:44:34.957564  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.957850  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:34.957881  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:34.957999  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.958048  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.959840  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (1.356521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43626]
I0320 23:44:34.960757  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26/status: (2.411812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43632]
I0320 23:44:34.961475  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.997232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I0320 23:44:34.963954  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (2.665859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43632]
I0320 23:44:34.964243  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.964449  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:34.964498  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:34.964616  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.964659  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.967228  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (1.958517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I0320 23:44:34.967295  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.938514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43626]
I0320 23:44:34.968604  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27/status: (3.726734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43632]
I0320 23:44:34.970330  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (1.239957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I0320 23:44:34.970649  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.970850  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:34.970889  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:34.971012  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.971089  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.973318  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (1.639618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43636]
I0320 23:44:34.974642  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28/status: (2.954988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I0320 23:44:34.975549  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (3.55157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I0320 23:44:34.977289  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (2.099427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43634]
I0320 23:44:34.977566  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.977726  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:34.977743  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:34.977813  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.977902  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.980525  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (2.368501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43636]
I0320 23:44:34.980831  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.320922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43640]
I0320 23:44:34.981434  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29/status: (3.258181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43638]
I0320 23:44:34.983919  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (2.079671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43640]
I0320 23:44:34.984253  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.984483  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30
I0320 23:44:34.984505  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30
I0320 23:44:34.984634  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.984726  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.986988  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (1.839054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43640]
I0320 23:44:34.987397  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.184586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43636]
I0320 23:44:34.989704  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30/status: (1.983476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43640]
I0320 23:44:34.992141  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (1.972159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43642]
I0320 23:44:34.992673  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.993205  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:34.993225  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:34.993347  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.993396  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:34.995899  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.898451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43636]
I0320 23:44:34.996406  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (1.426271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43644]
I0320 23:44:34.996924  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31/status: (2.87825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43642]
I0320 23:44:34.998820  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (1.412544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43644]
I0320 23:44:34.999149  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:34.999346  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:34.999377  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:34.999489  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:34.999543  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.002120  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.592216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43646]
I0320 23:44:35.002503  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (2.644906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43636]
I0320 23:44:35.005773  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32/status: (5.980806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43644]
I0320 23:44:35.007600  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (1.354902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43636]
I0320 23:44:35.007890  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.008104  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:35.008155  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:35.008260  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.008306  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.010700  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.635738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43648]
I0320 23:44:35.010875  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (1.902242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43646]
I0320 23:44:35.011271  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33/status: (2.740843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43636]
I0320 23:44:35.013270  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (1.16162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43646]
I0320 23:44:35.013871  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.014074  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:35.014100  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:35.014255  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.014324  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.015932  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (1.23019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43646]
I0320 23:44:35.017298  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.090329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43650]
I0320 23:44:35.017626  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34/status: (2.743957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43648]
I0320 23:44:35.018304  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.064291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43646]
I0320 23:44:35.019566  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (1.3873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43648]
I0320 23:44:35.019893  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.020078  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:35.020099  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:35.020198  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.020254  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.022365  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35/status: (1.873934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43646]
I0320 23:44:35.022743  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (1.35279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43650]
I0320 23:44:35.025370  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (4.485743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43652]
I0320 23:44:35.025945  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (3.175535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43646]
I0320 23:44:35.026283  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.026469  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:35.026520  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:35.026651  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.026732  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.028938  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (1.600385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43650]
I0320 23:44:35.029136  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36/status: (2.113984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43652]
I0320 23:44:35.029638  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.96535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43654]
I0320 23:44:35.030557  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (1.041047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43652]
I0320 23:44:35.030837  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.031015  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:35.031035  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:35.031182  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.031253  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.032997  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (1.320805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43650]
I0320 23:44:35.033048  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (1.591956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43654]
I0320 23:44:35.033372  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.033619  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:35.033656  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:35.033795  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.033860  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.035227  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (1.07972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43654]
I0320 23:44:35.036335  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37/status: (2.12996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43650]
I0320 23:44:35.037821  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-13.158dcf63fcf044b4: (4.979295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43656]
I0320 23:44:35.038262  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (1.42635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43650]
I0320 23:44:35.038556  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.038771  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:35.038814  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:35.038928  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.038988  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.041037  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (1.334385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43658]
I0320 23:44:35.041241  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38/status: (1.982804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43654]
I0320 23:44:35.041486  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.284493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43650]
I0320 23:44:35.042781  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (1.087322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43654]
I0320 23:44:35.043884  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.044012  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:35.044030  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:35.044139  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.044183  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.044366  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.449421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43650]
I0320 23:44:35.045777  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (1.269086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43658]
I0320 23:44:35.046679  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39/status: (2.190364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43654]
I0320 23:44:35.047530  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.76836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43650]
I0320 23:44:35.048855  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (1.139885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43654]
I0320 23:44:35.049161  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.049312  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:35.049329  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:35.049407  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.049485  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.053726  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-15.158dcf63fdbff245: (3.038106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43658]
I0320 23:44:35.061616  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (2.538064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43650]
I0320 23:44:35.062651  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.063008  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:35.063028  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:35.063155  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.063210  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.066511  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (1.480869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0320 23:44:35.066618  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (5.239986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43660]
I0320 23:44:35.067859  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.978251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43658]
I0320 23:44:35.069282  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40/status: (3.692094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43650]
I0320 23:44:35.071069  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (1.196923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0320 23:44:35.071322  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.071495  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:35.071520  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:35.071631  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.071687  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.073376  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (1.143884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0320 23:44:35.075154  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.26733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43664]
I0320 23:44:35.075748  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41/status: (3.744387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43660]
I0320 23:44:35.078344  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (1.346656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43664]
I0320 23:44:35.078591  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.078773  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:35.078792  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:35.078892  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.078940  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.081341  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.473803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43666]
I0320 23:44:35.081365  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42/status: (2.168455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43664]
I0320 23:44:35.082279  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (2.714015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0320 23:44:35.091718  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (9.986748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43664]
I0320 23:44:35.092323  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.092526  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:35.092544  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:35.092656  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.092709  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.094700  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (1.221825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43666]
I0320 23:44:35.095292  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.803323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43668]
I0320 23:44:35.095664  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43/status: (2.577933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0320 23:44:35.098079  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (1.939479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43662]
I0320 23:44:35.098417  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.098611  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:35.098647  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:35.098800  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.098873  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.100678  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (1.156255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43666]
I0320 23:44:35.101672  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.725878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43670]
I0320 23:44:35.104592  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44/status: (5.052328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43668]
I0320 23:44:35.108818  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (2.261822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43670]
I0320 23:44:35.109236  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.109525  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:35.109572  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:35.109712  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.109783  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.112402  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (1.629073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43666]
I0320 23:44:35.113074  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (1.906654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43670]
I0320 23:44:35.113413  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.113628  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:35.113679  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:35.114091  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.114189  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.114249  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-18.158dcf63fefb1e34: (3.081591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43672]
I0320 23:44:35.116620  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (2.000365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43666]
I0320 23:44:35.117273  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.272694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43672]
I0320 23:44:35.117964  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45/status: (3.349239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43670]
I0320 23:44:35.120414  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (1.25282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43666]
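Each failed attempt is followed by the `Updating pod condition ... (PodScheduled==False, Reason=Unschedulable)` step (factory.go:742), which is what produces the PUT `/pods/<name>/status` calls and the event POST/PATCH traffic interleaved through this log. A sketch of that status update with client-go; the signatures here are from current client-go (the release under test predates the context arguments), so treat it as illustrative rather than the actual factory.go code:

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markUnschedulable sets PodScheduled=False/Unschedulable on a pod's status,
// roughly what the "Updating pod condition" step does before the
// PUT .../pods/<name>/status requests seen in the log.
func markUnschedulable(ctx context.Context, cs kubernetes.Interface, pod *v1.Pod, msg string) error {
	cond := v1.PodCondition{
		Type:    v1.PodScheduled,
		Status:  v1.ConditionFalse,
		Reason:  v1.PodReasonUnschedulable, // "Unschedulable"
		Message: msg,
	}
	// Replace an existing PodScheduled condition or append a new one.
	updated := false
	for i := range pod.Status.Conditions {
		if pod.Status.Conditions[i].Type == v1.PodScheduled {
			pod.Status.Conditions[i] = cond
			updated = true
			break
		}
	}
	if !updated {
		pod.Status.Conditions = append(pod.Status.Conditions, cond)
	}
	_, err := cs.CoreV1().Pods(pod.Namespace).UpdateStatus(ctx, pod, metav1.UpdateOptions{})
	return err
}
```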
I0320 23:44:35.121345  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.121982  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (3.165633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43672]
I0320 23:44:35.122340  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:35.122380  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:35.122507  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.122574  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.124399  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (1.518278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43672]
I0320 23:44:35.125857  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.065434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43674]
I0320 23:44:35.126636  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46/status: (3.460168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43666]
I0320 23:44:35.128567  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (1.507697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43674]
I0320 23:44:35.128819  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.128982  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:35.128997  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:35.129155  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.129207  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.170159  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47/status: (40.187818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43674]
I0320 23:44:35.170160  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (40.256763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43672]
I0320 23:44:35.185074  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (55.036744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43676]
I0320 23:44:35.199720  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (1.706818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43672]
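The wrap.go:47 entries are the apiserver's per-request log line: verb, path, latency in parentheses, HTTP status, then user agent and remote address in brackets. Against the ~1-3ms baseline, the 40-55ms PUT/GET/POST just above are the first visible sign of contention in this run. A small sketch that pulls the fields out of such a line; the regexp and the shortened sample line are mine, not Kubernetes code:

```go
package main

import (
	"fmt"
	"regexp"
)

// wrapLine matches 'VERB /path: (latency) status [user-agent remote]'
// as emitted by the apiserver's request wrapper in these logs.
var wrapLine = regexp.MustCompile(`(\w+) (\S+): \(([^)]+)\) (\d+) \[(.+) ([\d.:]+)\]$`)

func main() {
	line := `PUT /api/v1/namespaces/test/pods/ppod-47/status: (40.187818ms) 200 [scheduler.test 127.0.0.1:43674]`
	if m := wrapLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("verb=%s path=%s latency=%s status=%s\n", m[1], m[2], m[3], m[4])
	}
}
```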
I0320 23:44:35.200233  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.200454  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:35.200481  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:35.200624  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.200699  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.206460  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.509037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43678]
I0320 23:44:35.207289  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (6.15088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43676]
I0320 23:44:35.208412  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48/status: (6.80048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43672]
I0320 23:44:35.209915  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (1.113537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43676]
I0320 23:44:35.210289  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.210507  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:35.210546  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:35.210657  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.210728  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.212592  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (1.009972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43678]
I0320 23:44:35.213976  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.821316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43680]
I0320 23:44:35.216641  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49/status: (5.015912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43676]
I0320 23:44:35.218565  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (1.25386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43680]
I0320 23:44:35.218771  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.219331  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:35.219347  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:35.219456  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.219497  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.224585  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-25.158dcf640219685e: (4.162645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43684]
I0320 23:44:35.227604  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (6.566957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43678]
I0320 23:44:35.228020  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (9.034113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43680]
I0320 23:44:35.228434  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.229369  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (8.55299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43682]
I0320 23:44:35.229686  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:35.229698  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:35.229808  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.229852  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.233641  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-26.158dcf64027bcd8a: (3.022908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43686]
I0320 23:44:35.235027  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (3.96923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43684]
I0320 23:44:35.235490  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (4.248762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43680]
I0320 23:44:35.235778  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.235979  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:35.236005  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:35.236132  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.236187  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.238137  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (1.404743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43686]
I0320 23:44:35.238813  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (2.433476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43680]
I0320 23:44:35.240653  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.240976  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:35.241016  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:35.241141  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.241217  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.245143  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (1.201085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43680]
I0320 23:44:35.245382  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.245514  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:35.245571  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:35.245705  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.245773  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.247579  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (1.173003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43680]
I0320 23:44:35.247633  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (1.500991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43690]
I0320 23:44:35.247879  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.248032  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:35.248079  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:35.248173  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.248220  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.250819  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-35.158dcf640630ff09: (11.644716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
I0320 23:44:35.251449  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (9.629036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43686]
I0320 23:44:35.254167  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-38.158dcf64074ed91b: (2.531195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
I0320 23:44:35.257005  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-42.158dcf6409b079a3: (2.20385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
I0320 23:44:35.260182  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-47.158dcf640caf6ee2: (2.571246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
I0320 23:44:35.261240  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (12.761751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43680]
I0320 23:44:35.261369  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (12.992969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43690]
I0320 23:44:35.261947  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
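`Node node1 is a potential node for preemption` (generic_scheduler.go:1152) means the fit failures on node1 are all of a kind that evicting lower-priority pods could cure, so node1 stays on the preemption candidate list. A sketch of that filtering idea; the set of unresolvable reasons below is a small illustrative subset, not the scheduler's actual list:

```go
package main

import "fmt"

// unresolvable lists predicate failures that evicting pods cannot fix; this
// is an illustrative subset only.
var unresolvable = map[string]bool{
	"node(s) didn't match node selector":              true,
	"node(s) had taints that the pod didn't tolerate": true,
}

// potentialNodes keeps only nodes whose failure reasons could all be cured by
// preempting pods, which is when the scheduler logs
// "Node <name> is a potential node for preemption."
func potentialNodes(failures map[string][]string) []string {
	var out []string
	for node, reasons := range failures {
		ok := true
		for _, r := range reasons {
			if unresolvable[r] {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, node)
		}
	}
	return out
}

func main() {
	failures := map[string][]string{
		"node1": {"Insufficient cpu", "Insufficient memory"},
	}
	fmt.Println(potentialNodes(failures)) // [node1]
}
```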
I0320 23:44:35.320883  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.944536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
I0320 23:44:35.421246  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.144813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
I0320 23:44:35.521128  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.066593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
I0320 23:44:35.621104  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.121977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
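The evenly spaced GETs of preemptor-pod, one roughly every 100ms, are the test's polling loop waiting for the preemptor to land on a node. A sketch of such a loop with client-go's wait package; the interval, timeout, and helper name are illustrative:

```go
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForScheduled polls every 100ms until the pod has a NodeName, matching
// the cadence of the preemptor-pod GETs in the log.
func waitForScheduled(cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Spec.NodeName != "", nil
	})
}
```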
I0320 23:44:35.685166  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:35.688761  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:35.689275  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:35.689292  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:35.689458  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.689522  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.691802  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (1.990624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
I0320 23:44:35.691920  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (1.483476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43686]
I0320 23:44:35.692229  106048 backoff_utils.go:79] Backing off 4s
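`Backing off 4s` (backoff_utils.go:79) is the per-pod retry backoff: each failed scheduling attempt doubles the pod's delay up to a cap, which is why an individual ppod-N reappears in the queue only every few seconds while newly created pods are tried immediately. A minimal sketch of that doubling policy; the constants are illustrative, not the scheduler's defaults:

```go
package main

import (
	"fmt"
	"time"
)

// podBackoff doubles a pod's retry delay on every failed attempt, capped at max.
type podBackoff struct {
	initial, max time.Duration
	attempts     map[string]int // pod key -> failed attempts
}

func newPodBackoff(initial, max time.Duration) *podBackoff {
	return &podBackoff{initial: initial, max: max, attempts: map[string]int{}}
}

// next records one more failure for the pod and returns the delay to apply.
func (b *podBackoff) next(podKey string) time.Duration {
	b.attempts[podKey]++
	d := b.initial
	for i := 1; i < b.attempts[podKey]; i++ {
		d *= 2
		if d >= b.max {
			return b.max
		}
	}
	return d
}

func main() {
	b := newPodBackoff(1*time.Second, 10*time.Second)
	for i := 0; i < 4; i++ {
		fmt.Printf("Backing off %v\n", b.next("preemption-race/ppod-1")) // 1s 2s 4s 8s
	}
}
```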
I0320 23:44:35.692472  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:35.692504  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:35.692516  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
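The reflector.go:235 `forcing resync` lines come from the shared informers the test wires up: every resync period the informer replays its cached objects to the registered handlers, which can re-enqueue pods independently of fresh API events. A sketch of constructing such a factory with client-go; the 30s period is illustrative:

```go
package sketch

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// newPodInformer builds a shared informer factory whose caches resync every
// resyncPeriod, producing the periodic "forcing resync" log lines.
func newPodInformer(cs kubernetes.Interface, resyncPeriod time.Duration) informers.SharedInformerFactory {
	factory := informers.NewSharedInformerFactory(cs, resyncPeriod)
	_ = factory.Core().V1().Pods().Informer() // register the pod informer
	// factory.Start(stopCh) would then launch the watch/resync loops.
	return factory
}
```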
I0320 23:44:35.692753  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.693046  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:35.693089  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:35.693187  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.693243  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.693806  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-1.158dcf63f60f9634: (3.094726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43694]
I0320 23:44:35.696593  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-2.158dcf63f6b0cb98: (2.220814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43686]
I0320 23:44:35.697076  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (3.640542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
I0320 23:44:35.697375  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.697549  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:35.697562  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:35.697670  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:35.697712  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:35.706541  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (8.648039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
I0320 23:44:35.706870  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-0.158dcf63f5a79b10: (8.289549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:35.707457  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:35.707855  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (13.762681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43694]
I0320 23:44:35.708213  106048 backoff_utils.go:79] Backing off 4s
I0320 23:44:35.709498  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (11.254962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43686]
I0320 23:44:35.709863  106048 backoff_utils.go:79] Backing off 4s
I0320 23:44:35.720446  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.503894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:35.821248  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.069263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:35.920898  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.860516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.020954  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.967964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.121160  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.122038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.220735  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.76277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.320854  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.889093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.421115  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.022162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.521036  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.988132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.581450  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod
I0320 23:44:36.581485  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod
I0320 23:44:36.581663  106048 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod", node "node1"
I0320 23:44:36.581686  106048 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0320 23:44:36.581742  106048 factory.go:733] Attempting to bind preemptor-pod to node1
I0320 23:44:36.581791  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:36.581811  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:36.581976  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.582027  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.583937  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (1.502912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
I0320 23:44:36.584257  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.584400  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:36.584415  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:36.584528  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.584574  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.585782  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (3.316965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43748]
I0320 23:44:36.585933  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod/binding: (3.715872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.586077  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.586538  106048 scheduler.go:572] pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
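Here the run turns: the volume binder finds nothing to assume (`all PVCs bound and nothing to do`, scheduler_binder.go:279), the scheduler POSTs to the pod's `binding` subresource, and preemptor-pod is bound to node1 with 1 node evaluated and 1 found feasible. A sketch of that bind call; the signature is from current client-go, so treat it as illustrative for this vintage:

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPod posts a Binding to the pod's "binding" subresource, the API call
// behind "POST .../pods/preemptor-pod/binding" and the subsequent
// "is bound successfully on node node1" log line.
func bindPod(ctx context.Context, cs kubernetes.Interface, namespace, podName, nodeName string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: podName},
		Target:     v1.ObjectReference{Kind: "Node", Name: nodeName},
	}
	return cs.CoreV1().Pods(namespace).Bind(ctx, binding, metav1.CreateOptions{})
}
```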
I0320 23:44:36.586746  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (2.004388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43688]
I0320 23:44:36.586782  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (1.693774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43752]
I0320 23:44:36.587043  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.587099  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.587348  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:36.587362  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:36.587490  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.587539  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.588951  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-3.158dcf63f73e1eb5: (5.906497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.589278  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (1.564309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.589554  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.589730  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:36.589746  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:36.589855  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.589896  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.590937  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (3.159409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43748]
I0320 23:44:36.591341  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.592209  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-5.158dcf63f8d2ff6a: (2.645174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.593129  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (3.011144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.593336  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.593563  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:36.593580  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:36.593674  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (3.407234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43754]
I0320 23:44:36.593705  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.593924  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.594041  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.38662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.594229  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.595417  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.165965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43748]
I0320 23:44:36.595513  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.348767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.595696  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.595797  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.595900  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:36.595930  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:36.596040  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.596099  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.597610  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.221866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43748]
I0320 23:44:36.597629  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.350826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.597836  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.597861  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.597979  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:36.597997  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:36.598095  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.598139  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.599529  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-6.158dcf63f933c22f: (4.944791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.599888  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (1.558507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.600290  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (1.879094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43748]
I0320 23:44:36.600509  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.600663  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:36.600685  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:36.600788  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.600828  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.601781  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.602686  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (1.570208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.602935  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.603048  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (2.053791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43748]
I0320 23:44:36.603348  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.603504  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:36.603525  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:36.603641  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.603697  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.604444  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-8.158dcf63fa2d375f: (4.314774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.605047  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (1.189558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43748]
I0320 23:44:36.605311  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.605403  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (1.410588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.605455  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:36.605478  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:36.605714  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.605727  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.605785  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.607354  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (1.379113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.607609  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.607817  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-9.158dcf63fa86ff69: (2.794966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.608146  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (1.948218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43748]
I0320 23:44:36.608508  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.608641  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:36.608674  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:36.608834  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.608889  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.611018  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-10.158dcf63fafbfe1d: (2.447039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.611169  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (2.088006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.611211  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (1.780486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43756]
I0320 23:44:36.611667  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.611800  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:36.611829  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:36.611941  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.611953  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.612000  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.613757  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (1.291943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43758]
I0320 23:44:36.613828  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (1.447172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.614032  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.614121  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.614267  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:36.614285  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:36.614361  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.614412  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.614798  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-11.158dcf63fb918288: (2.542997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.616172  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (1.082522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.616381  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.616511  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:36.616531  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:36.616606  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.616648  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.616991  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (2.14764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43758]
I0320 23:44:36.617229  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.617682  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-4.158dcf63f7cff1d7: (2.337454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.617922  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (1.036206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43696]
I0320 23:44:36.618416  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.618579  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:36.618599  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:36.618770  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.618903  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.620132  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (2.874012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43760]
I0320 23:44:36.620401  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.620840  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.560243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.620951  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.704332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43762]
I0320 23:44:36.620955  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (2.133297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43758]
I0320 23:44:36.621092  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.621245  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:36.621275  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:36.621386  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.621516  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.621792  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.621875  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-12.158dcf63fc698a01: (3.46499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.621977  106048 preemption_test.go:583] Check unschedulable pods still exist and were never scheduled...
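The preemption_test.go:583 marker switches the log from scheduling noise to verification: the burst of GET /pods/ppod-N requests that follows is the test re-fetching every low-priority pod and asserting that it still exists and was never bound to a node. A standalone sketch of that kind of check with client-go (current Get signature; the kubeconfig loading, namespace, and pod count are stand-ins for the test's own fixtures):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns := "preemption-race-test" // stand-in for the generated test namespace
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("ppod-%d", i)
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			log.Fatalf("pod %s should still exist: %v", name, err)
		}
		// A pod that was never scheduled has an empty node assignment.
		if pod.Spec.NodeName != "" {
			log.Fatalf("pod %s was unexpectedly scheduled to %s", name, pod.Spec.NodeName)
		}
	}
	fmt.Println("all unschedulable pods still exist and were never scheduled")
}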
I0320 23:44:36.623150  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (1.285956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.623405  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (1.23949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.623483  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.623718  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (1.865483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43758]
I0320 23:44:36.623806  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:36.623820  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:36.623895  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.623938  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.623966  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.626010  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (2.15535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.626078  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (1.86961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.626554  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (2.42681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43758]
I0320 23:44:36.633550  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-14.158dcf63fd4e0a17: (10.930647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43760]
I0320 23:44:36.636401  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.636644  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.636884  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:36.636961  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:36.637115  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.637183  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.639780  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.997845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43766]
I0320 23:44:36.640157  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (2.785966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.640486  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (3.926213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.640740  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.640880  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.640948  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:36.640979  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:36.641086  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.641173  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
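The "no fit" message repeated for every pod decodes as: one node total, zero that fit, and that node failing two predicates, insufficient CPU and insufficient memory, i.e. the pod's requests exceed what is still allocatable on node1 in both dimensions. A sketch of that comparison using the apimachinery resource.Quantity type (the free and requested amounts are invented; the real fit predicate also accounts for pods the scheduler has assumed onto the node):

package main

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Illustrative values: what node1 still has free vs. what a ppod requests.
	freeCPU, freeMem := resource.MustParse("100m"), resource.MustParse("64Mi")
	reqCPU, reqMem := resource.MustParse("500m"), resource.MustParse("256Mi")

	var reasons []string
	if reqCPU.Cmp(freeCPU) > 0 {
		reasons = append(reasons, "Insufficient cpu")
	}
	if reqMem.Cmp(freeMem) > 0 {
		reasons = append(reasons, "Insufficient memory")
	}
	// Reproduces the predicate-failure summary seen throughout the log.
	fmt.Printf("no fit: 0/1 nodes are available: 1 %s.\n", strings.Join(reasons, ", 1 "))
}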
I0320 23:44:36.642351  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-16.158dcf63fe14f200: (5.109067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43758]
I0320 23:44:36.643040  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (2.133186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.643321  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (1.536521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.643495  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (1.772879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.643747  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.643799  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.643975  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:36.644015  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:36.644151  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.644317  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.645918  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (1.377716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.646135  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (2.732989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.646239  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.646491  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-17.158dcf63fe6ba7e8: (3.356935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43758]
I0320 23:44:36.647019  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (2.438146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.647466  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.647636  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:36.647666  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:36.647793  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.647850  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.648227  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (1.582612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.649524  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (1.349959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.650001  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (1.302739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.650167  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (1.694732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43750]
I0320 23:44:36.650344  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-19.158dcf63ff7d853c: (2.951607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43758]
I0320 23:44:36.650399  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.650447  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.650569  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:36.650584  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:36.650682  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.650758  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.652135  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (1.193462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.652349  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.652707  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (2.212927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.652701  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30
I0320 23:44:36.652844  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30
I0320 23:44:36.652914  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.652970  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.653080  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (1.28641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43770]
I0320 23:44:36.653377  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.653408  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-20.158dcf63ffc0a4da: (2.465378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43758]
I0320 23:44:36.654448  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (1.319829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.654815  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (1.494915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.655147  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.655279  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (1.615082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43770]
I0320 23:44:36.655481  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:36.655503  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:36.655586  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.655577  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.655661  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.656291  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.455979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.657216  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-7.158dcf63f984a956: (2.676291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43758]
I0320 23:44:36.657635  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (1.835658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.657344  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (1.198794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43772]
I0320 23:44:36.657850  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.657877  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.658015  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:36.658074  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:36.658168  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.658031  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.085409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.658225  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.659637  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (1.230386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43772]
I0320 23:44:36.660185  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.660688  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-21.158dcf640066f6b0: (2.30559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.660763  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (1.444321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43774]
I0320 23:44:36.661255  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (2.768693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.661626  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.661821  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:36.661933  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:36.662136  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.662191  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.662310  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (1.048189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.664278  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-22.158dcf6400b638c5: (2.811088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43772]
I0320 23:44:36.664297  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (1.919617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.664527  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.664658  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:36.664674  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:36.664760  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.664803  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.665094  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (2.281406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43776]
I0320 23:44:36.666394  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (1.204845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43778]
I0320 23:44:36.666765  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.666902  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:36.666920  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:36.666985  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.667028  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.669892  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (4.528727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0320 23:44:36.670182  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (6.958892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.670523  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.670903  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-23.158dcf64010993cb: (6.044043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43764]
I0320 23:44:36.671257  106048 backoff_utils.go:79] Backing off 2s
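Every "Backing off 2s" comes from the per-pod retry backoff (backoff_utils.go:79): a pod that fails to schedule waits before re-entering the active queue, and the wait roughly doubles per failure up to a cap. All pods here show 2s, consistent with each being on its second attempt. A sketch of the doubling pattern; the 1s initial value and 10s cap are assumptions chosen to match the 2s seen in this log:

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay and clamps it to limit.
func nextBackoff(prev, limit time.Duration) time.Duration {
	next := prev * 2
	if next > limit {
		next = limit
	}
	return next
}

func main() {
	d := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: backing off %s\n", attempt, d)
		d = nextBackoff(d, 10*time.Second)
	}
	// Prints 1s, 2s, 4s, 8s, 10s: the log's 2s matches a second attempt.
}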
I0320 23:44:36.673535  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (2.882972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.673681  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (2.349186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43790]
I0320 23:44:36.673898  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (3.418345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43778]
I0320 23:44:36.674189  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.674312  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.674446  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:36.674473  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:36.674573  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.674677  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.675032  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-24.158dcf6401b55601: (3.507508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0320 23:44:36.675458  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (1.375613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43790]
I0320 23:44:36.676305  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (1.354646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.676625  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.676640  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (1.696455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43778]
I0320 23:44:36.676912  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.677043  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:36.677077  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:36.677139  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.677187  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.677930  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (1.615697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0320 23:44:36.679103  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (1.768087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43778]
I0320 23:44:36.679277  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (1.539879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.679478  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.679540  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.679739  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-27.158dcf6402e0affc: (3.541983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43790]
I0320 23:44:36.680632  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (1.274743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0320 23:44:36.680702  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:36.680735  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:36.680823  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.680877  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.682370  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (1.26235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0320 23:44:36.683156  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.683431  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (1.686273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43792]
I0320 23:44:36.683460  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-28.158dcf640342ca90: (2.770402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43768]
I0320 23:44:36.683665  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.683821  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:36.683837  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:36.683938  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.683989  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.684324  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (3.162235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43778]
I0320 23:44:36.685347  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:36.685925  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (1.775779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0320 23:44:36.686127  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (1.562348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43794]
I0320 23:44:36.686885  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.687097  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (2.318462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43778]
I0320 23:44:36.687187  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.687384  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:36.687415  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:36.687474  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-29.158dcf6403aabc3e: (3.02713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43792]
I0320 23:44:36.687511  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.687546  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.689120  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:36.690209  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (2.365827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0320 23:44:36.690358  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (1.634313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43798]
I0320 23:44:36.690516  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.690643  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.690691  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:36.690717  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (2.866193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43794]
I0320 23:44:36.690727  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:36.690943  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.691012  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.691669  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-30.158dcf640412a270: (3.305374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43796]
I0320 23:44:36.692957  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:36.692981  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:36.693001  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
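The reflector.go:235 "forcing resync" lines are not scheduler activity at all: shared informers built by k8s.io/client-go/informers/factory.go periodically replay their cached objects to registered handlers so consumers re-evaluate state even when nothing changed in etcd. A sketch of wiring such a factory; the in-cluster config, 30s period, and print-only handler are stand-ins for whatever the test harness actually uses:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // stand-in for the test's loopback config
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Every resync period the informer re-delivers its cached pods as update
	// events; that replay is what reflector.go logs as "forcing resync".
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			pod := newObj.(*corev1.Pod)
			fmt.Printf("resynced/updated pod %s/%s\n", pod.Namespace, pod.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever; illustration only
}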
I0320 23:44:36.693266  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (1.717618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0320 23:44:36.693371  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (2.145666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43798]
I0320 23:44:36.693764  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.694030  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.694448  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (2.150517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43796]
I0320 23:44:36.695171  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:36.695188  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:36.695272  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.695305  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-31.158dcf6404972cc7: (2.740161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43800]
I0320 23:44:36.695304  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.697295  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (970.452µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43798]
I0320 23:44:36.697734  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (1.074738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0320 23:44:36.698365  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-32.158dcf6404f4ed5d: (2.437563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0320 23:44:36.698414  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (2.692776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43800]
I0320 23:44:36.698614  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.698688  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.698829  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:36.698844  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:36.698950  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.699004  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.699919  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.569129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0320 23:44:36.700784  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (1.254906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43798]
I0320 23:44:36.701070  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.701317  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:36.701350  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:36.701463  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.701518  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.702650  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (2.145666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.702989  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (2.340337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0320 23:44:36.703283  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.703507  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (1.305935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43798]
I0320 23:44:36.703648  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (1.454949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43806]
I0320 23:44:36.703798  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.703941  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.704072  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:36.704095  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:36.704195  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.704236  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.705129  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (1.928006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.705570  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-33.158dcf64057aa893: (6.043775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0320 23:44:36.705661  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (1.263995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43798]
I0320 23:44:36.705836  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (1.406226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43780]
I0320 23:44:36.705991  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.706127  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:36.706148  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:36.706232  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.706280  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.706282  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.708844  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (2.148752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.708882  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (2.275536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0320 23:44:36.709105  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.709272  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.709608  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (2.473593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43810]
I0320 23:44:36.710002  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:36.710037  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:36.710168  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.710229  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.711866  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-34.158dcf6405d67db7: (4.647595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43808]
I0320 23:44:36.711987  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (1.496903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.712331  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (2.180094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43802]
I0320 23:44:36.712335  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (1.616883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43812]
I0320 23:44:36.712873  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.712932  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.713195  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:36.713220  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:36.713314  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.713369  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.714172  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (1.216247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.714752  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (1.093444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43814]
I0320 23:44:36.714995  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (1.099183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.715226  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.715310  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.715365  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:36.715414  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:36.715516  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.715567  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.715609  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-36.158dcf640693c2a4: (2.614388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43808]
I0320 23:44:36.716403  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (1.603153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.717233  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (1.342033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.717250  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (1.401732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43814]
I0320 23:44:36.717452  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.717531  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.717707  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:36.717732  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:36.717831  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.717875  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.718479  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (1.352815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.719299  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (1.250887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43808]
I0320 23:44:36.719567  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.719705  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:36.719750  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:36.720028  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.720127  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.720277  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (1.377986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.720876  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-13.158dcf63fcf044b4: (2.967331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.721986  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (1.371531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.722012  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (1.546514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43808]
I0320 23:44:36.722236  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.722411  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.722414  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (3.884648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43818]
I0320 23:44:36.722543  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:36.722560  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:36.722640  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (1.746647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43820]
I0320 23:44:36.722664  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.722709  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.722709  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.724477  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-37.158dcf6407009994: (2.778284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.724580  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (1.674388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43808]
I0320 23:44:36.724965  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (2.012954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.725406  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (2.240992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43820]
I0320 23:44:36.725606  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.725644  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.725742  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:36.725758  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:36.725850  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.725887  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.727760  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (1.734363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43808]
I0320 23:44:36.727879  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (1.841667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.728462  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.728474  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (2.222134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43822]
I0320 23:44:36.728581  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:36.728596  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:36.728678  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.728758  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:36.728799  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:36.729395  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (1.308072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43808]
I0320 23:44:36.729853  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-39.158dcf64079e2238: (4.742243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.731256  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (2.210066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.731299  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (2.337595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43822]
I0320 23:44:36.731571  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:36.731586  106048 backoff_utils.go:79] Backing off 2s
I0320 23:44:36.732009  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (1.97451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43808]
I0320 23:44:36.737121  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (4.704038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43822]
I0320 23:44:36.740545  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-15.158dcf63fdbff245: (10.072888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.740667  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (3.058343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43822]
I0320 23:44:36.749075  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (7.641408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.749562  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-40.158dcf6408c04a8b: (4.99193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.750835  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (1.127937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.753113  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-41.158dcf640941c87f: (2.716155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.753972  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (2.560403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.756119  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (1.478893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.756230  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-43.158dcf640a8275a6: (2.487746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.757766  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (1.247339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.759642  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-44.158dcf640ae097a2: (2.684423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.759993  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (1.779082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.762389  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (1.970555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.762756  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-18.158dcf63fefb1e34: (2.498995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.764295  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (1.187094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.765941  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (1.21031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.766280  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-45.158dcf640bca3bc9: (2.83993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.768038  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (1.610162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.770073  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (1.539111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.770322  106048 preemption_test.go:598] Cleaning up all pods...
I0320 23:44:36.770492  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-46.158dcf640c4a45f0: (2.671646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.773468  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:36.773511  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:36.774371  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (3.811034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.777899  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:36.777941  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:36.779872  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (4.685569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.780452  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-48.158dcf6410f25268: (2.214583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.783172  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:36.783253  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:36.784243  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-49.158dcf64118b5e47: (3.135027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.784903  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (4.584675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.788020  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:36.788105  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:36.788750  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-25.158dcf640219685e: (2.7247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.791530  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-26.158dcf64027bcd8a: (2.190177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.791641  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (6.442336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.794182  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-35.158dcf640630ff09: (1.953676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.797224  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-42.158dcf6409b079a3: (2.373208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.797479  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (5.441631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.799856  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-38.158dcf64074ed91b: (2.086808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.802657  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-47.158dcf640caf6ee2: (2.174005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.804505  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.419245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.806350  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.444326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.808209  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.435698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.810868  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.202939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.814505  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:36.814577  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:36.816033  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (18.234426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.817587  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.402366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.825295  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:36.825344  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:36.827103  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (10.276829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.827911  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.276955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.830331  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:36.830367  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:36.831504  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (3.924352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.832088  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.358536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.834744  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:36.834780  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:36.835707  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (3.874008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.836669  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.608929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.839189  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:36.839235  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:36.839953  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (3.777498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.841123  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.592609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.842955  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:36.842993  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:36.844848  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (4.605972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.845200  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.931948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.848250  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:36.848285  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:36.849754  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (4.49648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.851250  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.947823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.853909  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:36.853968  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:36.854990  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (4.858989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.856883  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.599628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.858038  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:36.858094  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:36.860014  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (4.542147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.860417  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.033013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.863837  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:36.863876  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:36.865228  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (4.731297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.866126  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.599793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.868539  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:36.868602  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:36.869868  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (4.222062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.870617  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.601781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.873718  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:36.873759  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:36.875440  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (4.824309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.875746  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.601408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.878496  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:36.878534  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:36.897346  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (21.606142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.900760  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (9.327941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.907953  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:36.908071  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:36.915676  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (16.615033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.932277  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (9.825401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.935865  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:36.935902  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:36.942038  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (5.73216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.945902  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (15.079653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.966911  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:36.966986  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:36.968476  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (19.776522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.970498  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.972172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.978361  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:36.978417  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:36.980141  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (11.093235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.980645  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.845363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.984728  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:36.984795  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:36.986785  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.621683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:36.991504  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (10.900599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:36.997943  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (6.024496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.005241  106048 cacher.go:647] cacher (*core.Pod): 1 objects queued in incoming channel.
I0320 23:44:37.005296  106048 cacher.go:647] cacher (*core.Pod): 2 objects queued in incoming channel.
I0320 23:44:37.008410  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:37.008508  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:37.008835  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:37.008897  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:37.011893  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.939626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.013327  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (14.975359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.015220  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.246146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.018955  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:37.019002  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:37.019702  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (5.867979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.021318  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.991345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.025200  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (4.843259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.027298  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:37.027360  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:37.036772  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.592276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.039629  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:37.039673  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:37.041813  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.814413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.043444  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (8.999775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.048085  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:37.048154  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:37.049963  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (6.046326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.050328  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.743003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.053507  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:37.053555  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:37.055623  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (5.236474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.061182  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (5.06349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.062307  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (6.482464ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.075081  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (13.477523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.075212  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:37.075358  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:37.078367  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:37.078402  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:37.086817  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (11.118211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.088664  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (12.788196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.093393  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (6.061552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.106101  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:37.106141  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:37.107571  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (14.109333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.109045  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.92617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.116277  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (8.389739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.122994  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:37.123030  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:37.123086  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:37.123114  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:37.124140  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (7.552746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.125509  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.115619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.127723  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.534959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.130387  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:37.130429  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:37.131904  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (6.671436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.132913  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.917402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.135655  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:37.135727  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:37.137735  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.265789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.138076  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (5.273268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.141130  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:37.141161  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:37.142409  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (4.012204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.143083  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.68223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.145939  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:37.146090  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:37.147667  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (4.822373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.148160  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.639264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.150831  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:37.150868  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:37.152843  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (4.801434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.153243  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.150602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.156156  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:37.156191  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:37.157522  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (4.3558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.159104  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.57956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.161686  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:37.161768  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:37.163942  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (5.027589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.164004  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.847963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.167405  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:37.167452  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:37.169999  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.809973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.170431  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (6.147501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.175429  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:37.175498  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:37.177781  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.045181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.179309  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (6.828674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.183500  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:37.183607  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:37.193746  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (14.144498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.201790  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (8.469258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.204505  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:37.204575  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:37.206873  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.971382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.209260  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (15.163875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.212918  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:37.212980  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:37.224028  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (14.023718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.227476  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:37.227519  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:37.227818  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (14.474075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.229078  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (4.603096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.229913  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.245782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.232782  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:37.232816  106048 scheduler.go:449] Skip schedule deleting pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:37.234514  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.478101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.235665  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (5.91354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.240119  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-0: (4.09394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.241318  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1: (864.579µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.247324  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (5.681457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.250227  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (1.367564ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.252852  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (1.149255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.255585  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (1.196449ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.258323  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (1.026138ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.260663  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (873.774µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.263156  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (993.167µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.266272  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (1.573473ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.269248  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.420514ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.272229  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (1.414833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.275220  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.252971ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.277843  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.175166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.280406  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (1.060681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.285843  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (3.866863ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.288547  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (1.152141ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.291204  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (1.061988ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.293998  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (1.067886ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.296773  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (1.166104ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.299665  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (1.294257ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.302524  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (1.064652ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.305579  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (1.285451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.308293  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (1.108096ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.311164  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (1.126597ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.313924  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (1.178724ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.317042  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.281156ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.319995  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (1.379778ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.322500  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (1.009ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.324911  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (918.957µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.327435  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (1.055423ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.329803  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (917.471µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.332737  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (1.336687ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.335075  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (793.211µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.337648  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (961.266µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.340282  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (1.007448ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.342615  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (783.103µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.345317  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (1.29106ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.348030  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (1.093644ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.350829  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (1.124038ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.353261  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (950.594µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.355714  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (816.466µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.358549  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (1.268413ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.361074  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (946.343µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.363750  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (1.141896ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.368254  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (797.965µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.370519  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (741.793µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.373129  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (1.125678ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.375638  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (1.008614ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.378235  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (977.158µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.380636  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (858.381µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.383011  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (911.657µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.385487  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (974.764µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.388257  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-0: (959.154µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.390495  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1: (748.529µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.392787  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (790.343µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.395143  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.933865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.395938  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0
I0320 23:44:37.395963  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0
I0320 23:44:37.396120  106048 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0", node "node1"
I0320 23:44:37.396140  106048 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0320 23:44:37.396186  106048 factory.go:733] Attempting to bind rpod-0 to node1
I0320 23:44:37.398723  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.163222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.399211  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1
I0320 23:44:37.399588  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1
I0320 23:44:37.399436  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-0/binding: (2.831042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.399732  106048 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1", node "node1"
I0320 23:44:37.399821  106048 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0320 23:44:37.399881  106048 factory.go:733] Attempting to bind rpod-1 to node1
I0320 23:44:37.400040  106048 scheduler.go:572] pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 23:44:37.401686  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.362465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.403807  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1/binding: (1.651592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.404089  106048 scheduler.go:572] pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0320 23:44:37.405708  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.360827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.502570  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-0: (2.221405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.605273  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1: (1.968076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.605580  106048 preemption_test.go:561] Creating the preemptor pod...
I0320 23:44:37.608158  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod
I0320 23:44:37.608179  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod
I0320 23:44:37.608285  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.608335  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.611265  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.922616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43832]
I0320 23:44:37.611724  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod/status: (2.849314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.613559  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (4.685274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43830]
I0320 23:44:37.614524  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.894306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43804]
I0320 23:44:37.614898  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.616295  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (10.480386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
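
Each "Unable to schedule ... 1 Insufficient cpu, 1 Insufficient memory" line above is followed by generic_scheduler.go flagging node1 as "a potential node for preemption": evicting enough lower-priority pods would let the pending pod fit. A toy illustration of that check (not the kube-scheduler implementation; resource units are made up):

package main

import "fmt"

type pod struct {
    name     string
    priority int32
    cpu, mem int64 // illustrative units: millicores / bytes
}

type node struct {
    name             string
    cpuFree, memFree int64
    pods             []pod
}

// potentialVictims reports whether evicting the node's lower-priority pods
// would free enough resources for the preemptor, and which pods those are.
func potentialVictims(n node, preemptor pod) (bool, []pod) {
    cpu, mem := n.cpuFree, n.memFree
    var victims []pod
    for _, p := range n.pods {
        if p.priority < preemptor.priority {
            victims = append(victims, p)
            cpu += p.cpu
            mem += p.mem
        }
    }
    return cpu >= preemptor.cpu && mem >= preemptor.mem, victims
}

func main() {
    n := node{name: "node1", pods: []pod{
        {"rpod-0", 0, 400, 1 << 30},
        {"rpod-1", 0, 400, 1 << 30},
    }}
    preemptor := pod{"preemptor-pod", 100, 500, 1 << 30}
    ok, victims := potentialVictims(n, preemptor)
    fmt.Println("potential node:", ok, "victims:", victims)
}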
I0320 23:44:37.617273  106048 preemption_test.go:567] Creating additional pods...
I0320 23:44:37.618320  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod/status: (3.086008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43830]
I0320 23:44:37.620247  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.788675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.626535  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (5.649501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.628691  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.747467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.633148  106048 wrap.go:47] DELETE /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/rpod-1: (14.510384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43830]
I0320 23:44:37.634045  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod
I0320 23:44:37.634080  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod
I0320 23:44:37.634208  106048 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod", node "node1"
I0320 23:44:37.634225  106048 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0320 23:44:37.634283  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:37.634290  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0
I0320 23:44:37.633763  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (4.573951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.634351  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.634390  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.634601  106048 factory.go:733] Attempting to bind preemptor-pod to node1
I0320 23:44:37.637627  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.993348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43832]
I0320 23:44:37.638219  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0/status: (3.187129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.638609  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (3.107742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43836]
I0320 23:44:37.638952  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod/binding: (3.166761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43834]
I0320 23:44:37.639244  106048 scheduler.go:572] pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
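
The DELETE of rpod-1 above, followed immediately by a successful bind of preemptor-pod, is consistent with victim eviction clearing room for the pending preemptor; whether that particular DELETE came from the preemption path or from test code is not visible in these lines. A hedged sketch of issuing such a delete with current client-go signatures (the namespace is illustrative, and this 2019-era test predates the context-taking API):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ns := "preemption-race" // illustrative namespace
    // Evict a victim; the scheduler then retries the pending preemptor pod.
    if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), "rpod-1", metav1.DeleteOptions{}); err != nil {
        fmt.Println("delete failed:", err)
    }
}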
I0320 23:44:37.640973  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (2.319487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.641233  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.641393  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:37.641412  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:37.641533  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.641572  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.643674  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1/status: (1.901127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.644033  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (2.244037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43836]
I0320 23:44:37.645961  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (1.034579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.646187  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.646315  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:37.646332  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:37.646411  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.646460  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.647875  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (7.07958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43834]
I0320 23:44:37.648283  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2/status: (1.649553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.650146  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (1.503438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.650568  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.270981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43834]
I0320 23:44:37.650862  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (5.832225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43830]
I0320 23:44:37.653338  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.653701  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:37.653768  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:37.653376  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.965217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43830]
I0320 23:44:37.654023  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.654104  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.654739  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (1.732346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43836]
I0320 23:44:37.653927  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.037511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43834]
I0320 23:44:37.657313  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (2.214831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43830]
I0320 23:44:37.657346  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.474646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43834]
I0320 23:44:37.657900  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3/status: (2.958932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.659465  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (4.090571ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43836]
I0320 23:44:37.661309  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (1.727468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.661784  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.254497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43830]
I0320 23:44:37.662207  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.025228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43834]
I0320 23:44:37.662578  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.662804  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:37.662827  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:37.662909  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.662949  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.665866  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (3.5448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43830]
I0320 23:44:37.665987  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.097483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.666017  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4/status: (2.747807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43838]
I0320 23:44:37.666590  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (3.013252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43840]
I0320 23:44:37.667920  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.58457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.668222  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.84169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43830]
I0320 23:44:37.669984  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (2.02921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43838]
I0320 23:44:37.670688  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.670949  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.346783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43830]
I0320 23:44:37.671001  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:37.671015  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1
I0320 23:44:37.671115  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.671151  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.673161  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (4.874448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.673734  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.92802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43840]
I0320 23:44:37.675780  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.661054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.676553  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (4.744615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43842]
I0320 23:44:37.677274  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (5.407345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43838]
I0320 23:44:37.677561  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.677782  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:37.677816  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:37.677897  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.677958  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.680435  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.429773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.680533  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (2.358234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43838]
I0320 23:44:37.681117  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5/status: (2.762622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43842]
I0320 23:44:37.681161  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-1.158dcf64a26f215a: (7.041339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43840]
I0320 23:44:37.683180  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.223815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.683866  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.092988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43838]
I0320 23:44:37.683868  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (2.237307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43842]
I0320 23:44:37.684674  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.684787  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:37.684800  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2
I0320 23:44:37.684901  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.684974  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.685835  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:37.686137  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.649878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43838]
I0320 23:44:37.687586  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (2.198458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43842]
I0320 23:44:37.687600  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (1.234342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.687934  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.688868  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-2.158dcf64a2b9b73c: (2.077778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43844]
I0320 23:44:37.689283  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:37.689578  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.31981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43838]
I0320 23:44:37.691686  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.710671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43842]
I0320 23:44:37.691861  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:37.691892  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:37.691976  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.692020  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.693079  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:37.693099  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
I0320 23:44:37.693088  106048 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
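
The reflector.go:235 "forcing resync" lines come from client-go informers created with a nonzero resync period: each reflector periodically redelivers its cached objects to event handlers. A minimal sketch using the real client-go API (the 30-second period and kubeconfig path are illustrative):

package main

import (
    "time"

    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // A nonzero resync period makes each reflector periodically redeliver
    // its cached objects, which is what the "forcing resync" lines record.
    factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
    podInformer := factory.Core().V1().Pods().Informer()
    stop := make(chan struct{})
    defer close(stop)
    factory.Start(stop)
    cache.WaitForCacheSync(stop, podInformer.HasSynced)
}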
I0320 23:44:37.694944  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.545665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43846]
I0320 23:44:37.695470  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (2.839577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43848]
I0320 23:44:37.698104  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6/status: (5.679231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43816]
I0320 23:44:37.699466  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (994.824µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43848]
I0320 23:44:37.699658  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.699703  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (7.191281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43842]
I0320 23:44:37.699794  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:37.699812  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3
I0320 23:44:37.699907  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.699955  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.702413  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.277839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43848]
I0320 23:44:37.702774  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (2.408998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43850]
I0320 23:44:37.703625  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-3.158dcf64a32e5617: (2.431697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43852]
I0320 23:44:37.705460  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (4.801673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43846]
I0320 23:44:37.705651  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.313093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43848]
I0320 23:44:37.705733  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.705837  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:37.705849  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4
I0320 23:44:37.705922  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.705975  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.707303  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (1.143456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43850]
I0320 23:44:37.707743  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (1.338651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43852]
I0320 23:44:37.708355  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.708519  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:37.708530  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5
I0320 23:44:37.708594  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.708630  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.709193  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-4.158dcf64a3b55292: (2.527948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43854]
I0320 23:44:37.711513  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.246394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43852]
I0320 23:44:37.713021  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-5.158dcf64a49a516a: (3.243535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43854]
I0320 23:44:37.714317  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (5.48762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43850]
I0320 23:44:37.714325  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.293038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43852]
I0320 23:44:37.714789  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (5.551797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43856]
I0320 23:44:37.714831  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.715682  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:37.715742  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:37.715854  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.715928  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.718747  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7/status: (2.409414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43850]
I0320 23:44:37.719103  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (2.412152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43854]
I0320 23:44:37.719488  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (4.552412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43852]
I0320 23:44:37.722723  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.783199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43850]
I0320 23:44:37.722736  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (5.57986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43858]
I0320 23:44:37.723200  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.219665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43852]
I0320 23:44:37.723455  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.723745  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:37.723766  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8
I0320 23:44:37.723838  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.723883  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.725191  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.531533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43858]
I0320 23:44:37.727165  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (2.613682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0320 23:44:37.728626  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.9319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43862]
I0320 23:44:37.728971  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.179346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43858]
I0320 23:44:37.729078  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8/status: (4.974789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43854]
I0320 23:44:37.730778  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.373492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43862]
I0320 23:44:37.731953  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (2.537066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0320 23:44:37.732295  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.732462  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:37.732479  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6
I0320 23:44:37.732591  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.498371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43862]
I0320 23:44:37.732747  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.732787  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.734957  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (1.056413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43864]
I0320 23:44:37.735569  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (2.377468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43862]
I0320 23:44:37.735794  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-6.158dcf64a570e97b: (2.272983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43866]
I0320 23:44:37.735801  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.735915  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:37.735935  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9
I0320 23:44:37.736012  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.736089  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.736092  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.189031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0320 23:44:37.737958  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.702513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43862]
I0320 23:44:37.738467  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.766745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0320 23:44:37.739013  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9/status: (2.703548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43864]
I0320 23:44:37.739409  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.041895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43868]
I0320 23:44:37.741245  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.731928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0320 23:44:37.741489  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.741644  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.616794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43868]
I0320 23:44:37.741653  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:37.741724  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10
I0320 23:44:37.741822  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.741868  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.743491  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.248505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43868]
I0320 23:44:37.744734  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.542174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43870]
I0320 23:44:37.744945  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.53675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43862]
I0320 23:44:37.745484  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10/status: (3.428597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0320 23:44:37.747465  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.549086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0320 23:44:37.747522  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.13632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43870]
I0320 23:44:37.747655  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.747873  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:37.747892  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11
I0320 23:44:37.747967  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.748006  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.749667  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (1.022648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.749838  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.911953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0320 23:44:37.750460  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.341037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43874]
I0320 23:44:37.751003  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11/status: (2.790871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43868]
I0320 23:44:37.751648  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.386977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0320 23:44:37.752470  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (1.111737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.752826  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.753388  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.36903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43860]
I0320 23:44:37.753783  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:37.753801  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12
I0320 23:44:37.753874  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.753910  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.755685  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (1.106913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43876]
I0320 23:44:37.755995  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.186346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.756149  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.531381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43878]
I0320 23:44:37.756782  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12/status: (2.378062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43874]
I0320 23:44:37.758308  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (1.126963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43878]
I0320 23:44:37.758607  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.758743  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:37.759262  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13
I0320 23:44:37.759371  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.759432  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.759806  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (3.357435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.761974  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.924165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.762189  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13/status: (2.261086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43878]
I0320 23:44:37.762313  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.954778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43880]
I0320 23:44:37.762432  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (2.640496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43876]
I0320 23:44:37.763454  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (976.161µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43878]
I0320 23:44:37.763706  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.763876  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:37.763905  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14
I0320 23:44:37.764030  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.764109  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.765558  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.631407ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43880]
I0320 23:44:37.766646  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (2.353651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43878]
I0320 23:44:37.766738  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.965708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43882]
I0320 23:44:37.766991  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14/status: (2.425921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.768738  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.428468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43878]
I0320 23:44:37.769338  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (1.425917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.769676  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.769881  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:37.769892  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15
I0320 23:44:37.769976  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.770087  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.771408  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (1.129936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.772861  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.239751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43884]
I0320 23:44:37.773156  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (4.144427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43878]
I0320 23:44:37.774404  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15/status: (2.13766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43880]
I0320 23:44:37.775544  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.839673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43884]
I0320 23:44:37.776963  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (1.274231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43880]
I0320 23:44:37.777376  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.777503  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:37.777519  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16
I0320 23:44:37.777578  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.777632  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.590137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43884]
I0320 23:44:37.777644  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.780266  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (2.476149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43880]
I0320 23:44:37.780651  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16/status: (2.614874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43884]
I0320 23:44:37.780718  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (2.363782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43886]
I0320 23:44:37.781485  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.285138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.782254  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (1.267705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43884]
I0320 23:44:37.782730  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.782890  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:37.782924  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17
I0320 23:44:37.783404  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.783462  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.785360  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (1.424439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.785683  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.562467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43888]
I0320 23:44:37.786088  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (4.938252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43880]
I0320 23:44:37.787751  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17/status: (2.14653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43884]
I0320 23:44:37.788588  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods: (1.996029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43888]
I0320 23:44:37.789299  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-17: (1.260178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43884]
I0320 23:44:37.789601  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.789715  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:37.789734  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18
I0320 23:44:37.789799  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.789838  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.791859  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.332063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.792441  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18/status: (2.195365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43888]
I0320 23:44:37.794251  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (1.398948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43888]
I0320 23:44:37.794398  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-18: (2.025244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.794473  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.794656  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:37.794671  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19
I0320 23:44:37.794865  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.794923  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.796726  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (987.239µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43888]
I0320 23:44:37.797390  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.374035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43890]
I0320 23:44:37.798173  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19/status: (2.384638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43872]
I0320 23:44:37.799893  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-19: (1.149323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43890]
I0320 23:44:37.800377  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.800528  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:37.800541  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:37.800607  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.800640  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.802496  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20/status: (1.63817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43890]
I0320 23:44:37.802740  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.463353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43894]
I0320 23:44:37.803901  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (1.106473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43890]
I0320 23:44:37.803990  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (2.256957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43888]
I0320 23:44:37.804146  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.804318  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:37.804336  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21
I0320 23:44:37.804443  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.804479  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.805826  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (986.881µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43894]
I0320 23:44:37.806840  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.725937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43896]
I0320 23:44:37.807514  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21/status: (2.805182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43890]
I0320 23:44:37.808922  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-21: (1.020409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43894]
I0320 23:44:37.809179  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.809408  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:37.809434  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:37.809516  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.809552  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.811719  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22/status: (1.934445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43894]
I0320 23:44:37.812579  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (2.732588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43892]
I0320 23:44:37.813305  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (1.272224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43894]
I0320 23:44:37.813749  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.757052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43898]
I0320 23:44:37.814194  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.814309  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:37.814324  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:37.814386  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.814485  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.816319  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23/status: (1.606428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43898]
I0320 23:44:37.817210  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.960211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43900]
I0320 23:44:37.817271  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (2.259301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43892]
I0320 23:44:37.818287  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.169293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43898]
I0320 23:44:37.818717  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.818889  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:37.818918  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24
I0320 23:44:37.818994  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.819179  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.820268  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (1.023161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43892]
I0320 23:44:37.820756  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.349517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43900]
I0320 23:44:37.821768  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24/status: (2.200648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43902]
I0320 23:44:37.823257  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-24: (1.061591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43900]
I0320 23:44:37.823492  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.823655  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:37.823681  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:37.823796  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.823862  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.826603  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25/status: (2.435008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43900]
I0320 23:44:37.826848  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.237867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43904]
I0320 23:44:37.827322  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (1.322565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43892]
I0320 23:44:37.828158  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (1.187485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43900]
I0320 23:44:37.828394  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.828545  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:37.828561  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7
I0320 23:44:37.828720  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.828769  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.829990  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.072224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43892]
I0320 23:44:37.830233  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.830373  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:37.830403  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26
I0320 23:44:37.830501  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.830550  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.831502  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (2.568686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43904]
I0320 23:44:37.832453  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-7.158dcf64a6ddae0e: (3.096543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43906]
I0320 23:44:37.833134  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (1.887562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43908]
I0320 23:44:37.833804  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26/status: (3.049061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43892]
I0320 23:44:37.835031  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.0559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43906]
I0320 23:44:37.835457  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-26: (1.089011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43908]
I0320 23:44:37.835714  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.835852  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:37.835867  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27
I0320 23:44:37.836024  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.836081  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.838509  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.593595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43910]
I0320 23:44:37.839154  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (2.554441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43906]
I0320 23:44:37.839783  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27/status: (3.201607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43904]
I0320 23:44:37.841253  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-27: (1.0115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43906]
I0320 23:44:37.841582  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.841711  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:37.841724  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28
I0320 23:44:37.841798  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.841840  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.843440  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.151254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43912]
I0320 23:44:37.844105  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28/status: (1.990286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43906]
I0320 23:44:37.845086  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (2.967531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43910]
I0320 23:44:37.845658  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-28: (936.174µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43906]
I0320 23:44:37.845907  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.846032  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:37.846046  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29
I0320 23:44:37.846125  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.846161  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.847845  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (1.134625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43912]
I0320 23:44:37.848376  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29/status: (1.991882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43910]
I0320 23:44:37.848931  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.203759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43914]
I0320 23:44:37.849925  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-29: (1.079512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43910]
I0320 23:44:37.850294  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.850439  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30
I0320 23:44:37.850466  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30
I0320 23:44:37.850593  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.850752  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.853879  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (2.507289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43912]
I0320 23:44:37.853975  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30/status: (2.141897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43914]
I0320 23:44:37.854458  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.605582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0320 23:44:37.855808  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-30: (978.694µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43914]
I0320 23:44:37.856104  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.856234  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:37.856249  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31
I0320 23:44:37.856328  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.856369  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.857623  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (1.032012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43912]
I0320 23:44:37.858370  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.443516ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43918]
I0320 23:44:37.858942  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31/status: (2.230606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0320 23:44:37.860514  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-31: (1.178884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0320 23:44:37.860816  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.860948  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:37.860962  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32
I0320 23:44:37.861024  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.861078  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.862392  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (998.123µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43912]
I0320 23:44:37.863771  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.062022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43920]
I0320 23:44:37.864960  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32/status: (2.05012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43916]
I0320 23:44:37.866405  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-32: (991.986µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43920]
I0320 23:44:37.866627  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.866748  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:37.866762  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:37.866820  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.866856  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.869205  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (2.014335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43912]
I0320 23:44:37.869472  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33/status: (2.418425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43920]
I0320 23:44:37.870117  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.251009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43922]
I0320 23:44:37.871384  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (931.199µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43920]
I0320 23:44:37.871632  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.871757  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:37.871772  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34
I0320 23:44:37.871846  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.871888  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.873931  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (1.545756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43912]
I0320 23:44:37.874468  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.315674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43924]
I0320 23:44:37.874939  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34/status: (2.543993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43922]
I0320 23:44:37.876549  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-34: (1.044598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43924]
I0320 23:44:37.876752  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.876887  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:37.876901  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35
I0320 23:44:37.876983  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.877028  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.879101  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (1.569986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43912]
I0320 23:44:37.879403  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35/status: (2.152596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43924]
I0320 23:44:37.880696  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.394739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43912]
I0320 23:44:37.880860  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-35: (1.152161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43924]
I0320 23:44:37.881187  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.881310  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:37.881326  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36
I0320 23:44:37.881392  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.881443  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.883092  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (1.229447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43912]
I0320 23:44:37.883935  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36/status: (1.931518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43924]
I0320 23:44:37.885945  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-36: (1.165284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43924]
I0320 23:44:37.886896  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.887083  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:37.887104  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37
I0320 23:44:37.887133  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (4.253046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43928]
I0320 23:44:37.887244  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.887283  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.890039  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (2.485809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43912]
I0320 23:44:37.890151  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.806376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43930]
I0320 23:44:37.891231  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.670988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43994]
I0320 23:44:37.891856  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37/status: (4.237082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43924]
I0320 23:44:37.893644  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-37: (1.2647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43930]
I0320 23:44:37.893856  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.894045  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:37.894075  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:37.894147  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.894193  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.896108  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38/status: (1.684331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43930]
I0320 23:44:37.896456  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.553478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43996]
I0320 23:44:37.896781  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (2.049248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43912]
I0320 23:44:37.898277  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (1.116159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43930]
I0320 23:44:37.898518  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.898738  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:37.898754  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:37.898934  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.898978  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.900641  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (1.194651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43996]
I0320 23:44:37.901331  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39/status: (2.138801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43930]
I0320 23:44:37.902302  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.328548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43996]
I0320 23:44:37.903354  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (1.470954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43930]
I0320 23:44:37.903599  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.903754  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:37.903768  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40
I0320 23:44:37.903831  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.903870  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.905479  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (1.082219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43996]
I0320 23:44:37.905854  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.393322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44000]
I0320 23:44:37.907536  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40/status: (3.069133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43998]
I0320 23:44:37.909167  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-40: (1.126809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44000]
I0320 23:44:37.909438  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.909598  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:37.909613  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41
I0320 23:44:37.909684  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.909723  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.911296  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (1.25966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43996]
I0320 23:44:37.912619  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41/status: (2.405026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44000]
I0320 23:44:37.912628  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.583369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44002]
I0320 23:44:37.913968  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-41: (1.013049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44000]
I0320 23:44:37.914211  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.914350  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:37.914365  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42
I0320 23:44:37.914452  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.914506  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.915867  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (891.729µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43996]
I0320 23:44:37.916402  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42/status: (1.418832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44000]
I0320 23:44:37.917361  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (2.33213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44004]
I0320 23:44:37.917891  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-42: (1.163918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44000]
I0320 23:44:37.918174  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.918316  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:37.918334  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43
I0320 23:44:37.918414  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.918535  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.919856  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (905.857µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44004]
I0320 23:44:37.920828  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.818655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44006]
I0320 23:44:37.921118  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43/status: (2.24262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43996]
I0320 23:44:37.922764  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-43: (1.173837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44006]
I0320 23:44:37.923510  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.923660  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:37.923689  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:37.923802  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.923855  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.925875  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44/status: (1.652025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44006]
I0320 23:44:37.926407  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.758648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44010]
I0320 23:44:37.927291  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (1.190433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44004]
I0320 23:44:37.927417  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (1.105253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44006]
I0320 23:44:37.927687  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.927808  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:37.927822  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45
I0320 23:44:37.927918  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.927964  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.929393  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (988.575µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44010]
I0320 23:44:37.929728  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45/status: (1.365928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44006]
I0320 23:44:37.931554  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.898047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44014]
I0320 23:44:37.932334  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-45: (2.209787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44006]
I0320 23:44:37.932882  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.933195  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:37.933228  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:37.933303  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.933377  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.935882  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.780841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44018]
I0320 23:44:37.936367  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46/status: (2.747167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44014]
I0320 23:44:37.937485  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (3.400148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44010]
I0320 23:44:37.938077  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (953.174µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44014]
I0320 23:44:37.938500  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.938663  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:37.938695  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47
I0320 23:44:37.938783  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.938835  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.940987  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (1.470177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44022]
I0320 23:44:37.941138  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (1.984558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44010]
I0320 23:44:37.942358  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47/status: (2.891505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44018]
I0320 23:44:37.945897  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-47: (3.053767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44010]
I0320 23:44:37.946233  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.946453  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:37.946482  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:37.946610  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.946681  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.948318  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (1.187189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44022]
I0320 23:44:37.949826  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48/status: (2.635841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44010]
I0320 23:44:37.951381  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (4.194963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44026]
I0320 23:44:37.953453  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (3.090087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44022]
I0320 23:44:37.954010  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.954213  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:37.954228  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49
I0320 23:44:37.954294  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.954332  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.963369  106048 wrap.go:47] POST /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events: (7.545044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44030]
I0320 23:44:37.964249  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (9.468718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44010]
I0320 23:44:37.964604  106048 wrap.go:47] PUT /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49/status: (10.05836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44026]
I0320 23:44:37.966147  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-49: (1.043373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44010]
I0320 23:44:37.966389  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.966560  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:37.966576  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20
I0320 23:44:37.966663  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.966698  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.968246  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (1.087979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44026]
I0320 23:44:37.968360  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-20: (1.504463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44030]
I0320 23:44:37.968573  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.968681  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:37.968699  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22
I0320 23:44:37.968756  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.968793  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.969549  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-20.158dcf64abea54b7: (2.136319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44054]
I0320 23:44:37.969972  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (968.645µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44030]
I0320 23:44:37.970015  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-22: (1.008069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44026]
I0320 23:44:37.970204  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.970362  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:37.970378  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23
I0320 23:44:37.970464  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.970503  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.972132  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.390599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44030]
I0320 23:44:37.972546  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-23: (1.798977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44026]
I0320 23:44:37.972770  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.972969  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-22.158dcf64ac7252c2: (2.975578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44054]
I0320 23:44:37.973083  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:37.973103  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25
I0320 23:44:37.973176  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.973223  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.974715  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (1.28814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44030]
I0320 23:44:37.974751  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-25: (1.304163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44026]
I0320 23:44:37.975090  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.975242  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:37.975255  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33
I0320 23:44:37.975317  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.975351  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.976410  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-23.158dcf64acbd8399: (2.714062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44056]
I0320 23:44:37.977800  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (2.054619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44030]
I0320 23:44:37.978080  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.978775  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-33: (2.691989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44026]
I0320 23:44:37.979288  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:37.979309  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38
I0320 23:44:37.979376  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.979410  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.980093  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-25.158dcf64ad4cac8d: (3.062147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44056]
I0320 23:44:37.980732  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (1.174761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44030]
I0320 23:44:37.980916  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.981035  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:37.981083  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39
I0320 23:44:37.981250  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.981327  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.981287  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-38: (1.641287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:37.983141  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (1.185852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44056]
I0320 23:44:37.983401  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.983515  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:37.983530  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44
I0320 23:44:37.983588  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.983621  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.984126  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-39: (1.775875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:37.985950  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-33.158dcf64afdcb743: (4.820165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44030]
I0320 23:44:37.986722  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (1.116696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44056]
I0320 23:44:37.986940  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.987029  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:37.987039  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46
I0320 23:44:37.987133  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.987171  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.987527  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-44: (1.636247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:37.988960  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (1.6269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44056]
I0320 23:44:37.989309  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0320 23:44:37.989550  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-46: (2.136246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44060]
I0320 23:44:37.989814  106048 scheduling_queue.go:908] About to try and schedule pod preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:37.989830  106048 scheduler.go:453] Attempting to schedule pod: preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48
I0320 23:44:37.989925  106048 factory.go:647] Unable to schedule preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0320 23:44:37.989966  106048 factory.go:742] Updating pod condition for preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0320 23:44:37.991334  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (1.138713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:37.991897  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-48: (1.777847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44056]
I0320 23:44:37.992124  106048 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
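
The same short cycle (pop from the scheduling queue, fail predicates on the only node, set PodScheduled=False, record node1 as a preemption candidate) repeats for each of ppod-22 through ppod-48 above. The "1 Insufficient cpu, 1 Insufficient memory" verdict means the pod's resource requests exceed node1's remaining allocatable capacity. A sketch of a pod that would stay Pending this way; the image, names, and quantities are illustrative assumptions, not values from preemption_test.go:

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // pendingPod builds a pod whose requests exceed the test node's free
    // capacity, so the scheduler reports "Insufficient cpu, Insufficient
    // memory" and the pod stays Pending.
    func pendingPod(name, ns string) *v1.Pod {
    	return &v1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
    		Spec: v1.PodSpec{
    			Containers: []v1.Container{{
    				Name:  "pause",
    				Image: "k8s.gcr.io/pause:3.1",
    				Resources: v1.ResourceRequirements{
    					Requests: v1.ResourceList{
    						// Assumed quantities: anything above the node's
    						// remaining allocatable CPU/memory will do.
    						v1.ResourceCPU:    resource.MustParse("4"),
    						v1.ResourceMemory: resource.MustParse("8Gi"),
    					},
    				},
    			}},
    		},
    	}
    }

    func main() {
    	p := pendingPod("ppod-0", "preemption-race-example")
    	fmt.Printf("%s requests cpu=%s memory=%s\n", p.Name,
    		p.Spec.Containers[0].Resources.Requests.Cpu(),
    		p.Spec.Containers[0].Resources.Requests.Memory())
    }
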
I0320 23:44:37.993395  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/preemptor-pod: (1.508412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:37.993635  106048 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
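
Here the test switches from scheduling to verification: the run of GET /pods/ppod-N requests that follows is the test reading back each pending pod to confirm it was never bound. A sketch of what such a check amounts to, assuming a client-go clientset and the pre-context Get signature matching this 2019-era log (an illustration, not the actual preemption_test.go helper):

    package sketch

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // checkNeverScheduled confirms each ppod-i is still Pending: it was never
    // bound to a node and its PodScheduled condition is False, matching the
    // condition updates logged earlier. Hypothetical helper; names assumed.
    func checkNeverScheduled(cs kubernetes.Interface, ns string, numPods int) error {
    	for i := 0; i < numPods; i++ {
    		name := fmt.Sprintf("ppod-%d", i)
    		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if pod.Spec.NodeName != "" {
    			return fmt.Errorf("%s was scheduled to %s, expected it to stay pending", name, pod.Spec.NodeName)
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == v1.PodScheduled && c.Status != v1.ConditionFalse {
    				return fmt.Errorf("%s has PodScheduled=%s, expected False", name, c.Status)
    			}
    		}
    	}
    	return nil
    }

The single reused source port (127.0.0.1:44058) on the GET lines below is consistent with this kind of sequential, one-pod-at-a-time readback.
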
I0320 23:44:37.994443  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-38.158dcf64b17dba46: (7.806981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44030]
I0320 23:44:37.995236  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-0: (1.081425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:37.997137  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-39.158dcf64b1c6da6a: (2.192929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44030]
I0320 23:44:37.998152  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-1: (2.128612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:37.999737  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-2: (1.16805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.000170  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-44.158dcf64b3427353: (2.447801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44030]
I0320 23:44:38.001430  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-3: (1.386894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.003536  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-4: (1.627356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.003553  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-46.158dcf64b3d3bb1e: (2.564715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44030]
I0320 23:44:38.005138  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-5: (1.102529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.006603  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-6: (930.415µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.006899  106048 wrap.go:47] PATCH /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/events/ppod-48.158dcf64b49eb60c: (2.73988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44056]
I0320 23:44:38.008515  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-7: (1.30202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.010263  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-8: (1.329457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.011731  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-9: (1.098012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.013466  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-10: (1.298434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.015178  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-11: (1.071152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.016600  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-12: (993.764µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.017988  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-13: (1.04215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.019537  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-14: (1.054045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.022294  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-15: (1.128448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.023756  106048 wrap.go:47] GET /api/v1/namespaces/preemption-race1b289f0c-4b6a-11e9-a867-0242ac110002/pods/ppod-16: (1.080451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44058]
I0320 23:44:38.025258  106048 wrap.go:47] GET /api/v1/namespaces/pre